├── LICENSE
├── README.md
├── Tutorials
│   ├── .ipynb_checkpoints
│   │   ├── demo_flourscent_annotation_git-checkpoint.ipynb
│   │   ├── demo_flourscent_map_annotations_to_cells_git-checkpoint.ipynb
│   │   ├── demo_visium_annotation_git-checkpoint.ipynb
│   │   └── demo_visium_map_annotation_to_spots_git-checkpoint.ipynb
│   ├── bokeh_plot.png
│   ├── demo_flourscent_map_annotations_to_cells.ipynb
│   ├── demo_flourscent_tissue_annotation.ipynb
│   ├── demo_visium_annotation_to_spots.ipynb
│   ├── demo_visium_tissue_annotation.ipynb
│   └── readme.md
├── data
│   ├── tissue_tag_minimal_example_ibex
│   │   ├── .ipynb_checkpoints
│   │   │   └── Capture_annotation-checkpoint.PNG
│   │   ├── Capture_annotation.PNG
│   │   ├── Sample_05_THY45_Z5_ch0009.jpg
│   │   ├── Sample_05_THY45_Z5_ch0058.jpg
│   │   ├── readme.md
│   │   ├── sample_05_xy.csv
│   │   └── tissue_annotations
│   │       ├── annotation_tissue.pickle
│   │       ├── annotation_tissue.tif
│   │       ├── annotation_tissue_colors.pickle
│   │       └── annotation_tissue_ppm.pickle
│   └── tissue_tag_minimal_example_visium
│       ├── Capture_annotations.PNG
│       ├── raw_feature_bc_matrix.h5
│       ├── spatial
│       │   ├── scalefactors_json.json
│       │   ├── tissue_hires_image.png
│       │   ├── tissue_lowres_image.png
│       │   └── tissue_positions_list.csv
│       └── tissue_annotations
│           ├── annotations.pickle
│           ├── annotations.tif
│           ├── annotations_colors.pickle
│           └── annotations_ppm.pickle
├── setup.py
├── tissueTag_logo.png
└── tissue_tag
    ├── .ipynb_checkpoints
    │   └── tissue_tag-checkpoint.py
    ├── __init__.py
    ├── __pycache__
    │   ├── __init__.cpython-39.pyc
    │   └── tissue_tag.cpython-39.pyc
    └── tissue_tag.py
/LICENSE:
--------------------------------------------------------------------------------
1 | BSD 3-Clause License
2 |
3 | Copyright (c) 2023, nadavyayon
4 |
5 | Redistribution and use in source and binary forms, with or without
6 | modification, are permitted provided that the following conditions are met:
7 |
8 | 1. Redistributions of source code must retain the above copyright notice, this
9 | list of conditions and the following disclaimer.
10 |
11 | 2. Redistributions in binary form must reproduce the above copyright notice,
12 | this list of conditions and the following disclaimer in the documentation
13 | and/or other materials provided with the distribution.
14 |
15 | 3. Neither the name of the copyright holder nor the names of its
16 | contributors may be used to endorse or promote products derived from
17 | this software without specific prior written permission.
18 |
19 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
20 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
22 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
23 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
24 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
25 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
26 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
27 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
28 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
29 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 | # TissueTag: Jupyter Image Annotator
8 |
9 | TissueTag consists of two major components:
10 | 1) **Jupyter-based image annotation tool**: utilising the Bokeh Python library (http://www.bokeh.pydata.org), empowered by Datashader (https://datashader.org/index.html) and HoloViews (https://holoviews.org/index.html), for pyramidal image rendering. This tool offers a streamlined annotation solution with subpixel resolution for quick, interactive annotation of various image types (e.g., brightfield, fluorescence). TissueTag produces labelled images* (e.g., cortex, medulla) and logs all tissue labels, annotation resolution, and colours for reproducibility.
11 |
12 | 2) **Mapping annotations to data**: this component converts the pixel-based annotations to a hexagonal grid, allowing calculation of distances between morphological structures and thus continuous annotation. The calculation depends on the grid sampling frequency and on the number of grid spots we define as a spatial neighbourhood. This is foundational for calculating a morphological axis (OrganAxis, see tutorials).
13 |
14 | *Note: A labeled image is an integer array where each pixel value (0,1,2,...) corresponds to an annotated structure.*
15 |
16 | **Annotator**: Enables interactive annotation of predefined anatomical objects via convex shape filling while toggling between reference and annotation image.
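
A minimal end-to-end sketch of this flow, using the package's own functions (paths, structure names, and parameter values are illustrative, and we assume the helpers are imported from the installed package):

```
import tissue_tag as tt

# load an image; ppm = pixels per micron (here assumed to be 2 for the source image)
im, ppm_in, ppm_out = tt.read_image('data/sample_image.jpg', ppm_image=2, ppm_out=1)

# scribble sparse annotations for two example structures in the notebook
anno_dict = {'cortex': '#ff0000', 'medulla': '#0000ff'}
p, render_dict = tt.scribbler(im, anno_dict)
p  # display and draw with the per-structure freehand tools

# convert the scribbles to a label image, train a pixel classifier, and save
training_labels = tt.scribble_to_labels(im, render_dict)
labels = tt.sk_rf_classifier(im, training_labels, anno_dict)
tt.save_annotation('tissue_annotations/', labels, 'annotations',
                   list(anno_dict.keys()), list(anno_dict.values()), ppm_out)
```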
17 |
18 | We envision this tool as a starting point, so contributions and suggestions are highly appreciated!
19 |
20 | ## Installation
21 |
22 | 1) You need to install either [jupyter-lab](https://jupyter.org/install) or [jupyter-notebook](https://jupyter.org/install)
23 | 2) Install TissueTag using pip:
24 | ```
25 | pip install tissue-tag
26 | ```
27 |
28 | ## How to use
29 | We supply two examples of usage for TissueTag annotations:
30 | 1) Visium spatial transcriptomics -
31 | in this example we annotate a postnatal thymus dataset by calling the major anatomical regions in multiple ways (based on marker gene expression or sparse manual annotations), then training a random forest pixel classifier for an initial prediction, followed by manual corrections ([visium annotation tutorial](https://github.com/nadavyayon/TissueTag/blob/main/Tutorials/demo_visium_tissue_annotation.ipynb)), and finally migrating the annotations back to the Visium AnnData object ([mapping annotations to visium](https://github.com/nadavyayon/TissueTag/blob/main/Tutorials/demo_visium_annotation_to_spots.ipynb)).
32 | We also show how to calculate a morphological axis (OrganAxis) in two ways.
33 |
34 | 2) IBEX single cell multiplex protein imaging -
35 | in this example we annotate a postnatal thymus image by calling the major anatomical regions and training a random forest classifier for an initial prediction, followed by manual corrections
36 | ([IBEX annotation tutorial](https://github.com/nadavyayon/TissueTag/blob/main/Tutorials/demo_flourscent_tissue_annotation.ipynb)). Next, we show how one can migrate these annotations to segmented cells and calculate a morphological axis (OrganAxis), as sketched below ([IBEX mapping annotations tutorial](https://github.com/nadavyayon/TissueTag/blob/main/Tutorials/demo_flourscent_map_annotations_to_cells.ipynb)).
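
A sketch of the mapping step (the grid and distance calculation; column names and parameter values are illustrative, and `sample_05_xy.csv` is the example cell table shipped under `data/tissue_tag_minimal_example_ibex`):

```
import pandas as pd
import tissue_tag as tt

# load a saved annotation and place it on a hexagonal grid
im_lab, anno_order, ppm = tt.load_annotation('tissue_annotations/', 'annotation_tissue')
df_grid = tt.grid_anno(im_lab, [im_lab], ['annotations'], [anno_order],
                       spot_diameter=30, ppm_in=ppm, ppm_out=ppm)

# mean distance of every grid spot to each structure (the basis of an OrganAxis)
tt.dist2cluster_fast(df_grid, 'annotations', KNN=5)

# migrate the grid annotations to segmented cells (x/y column names depend on the table)
df_cells = pd.read_csv('data/tissue_tag_minimal_example_ibex/sample_05_xy.csv')
df_cells = tt.anno_to_cells(df_cells, 'x', 'y', df_grid)
```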
37 |
38 | ## Usage on a cluster vs local machine
39 | Bokeh interactive plotting requires communication between the notebook instance and the browser.
40 | We have tested TissueTag with Jupyter Lab and Jupyter Notebook, but have not yet found a solution that works for JupyterHub.
41 | SSH tunnelling is likewise not supported; if you are accessing the notebook from outside your institute, VPN access should work fine.
42 |
43 | ## How to cite:
44 | Please cite the following preprint: https://www.biorxiv.org/content/10.1101/2023.10.25.562925v1
45 |
46 |
47 |
--------------------------------------------------------------------------------
/Tutorials/bokeh_plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/Tutorials/bokeh_plot.png
--------------------------------------------------------------------------------
/Tutorials/readme.md:
--------------------------------------------------------------------------------
1 |
2 | ### Setting up an environment for Visium annotation
3 |
4 | 1) create the environment: `conda create -n tissuetag python=3.9`, then `conda activate tissuetag`
5 |
6 | 2) install TissueTag, scanpy and JupyterLab: `pip install tissue-tag`, `pip install scanpy`, `conda install -c conda-forge jupyterlab`
7 |
8 | 3) register the kernel with Jupyter: `ipython kernel install --name tissuetag --user`
9 |
10 | ## Note
11 | If a plot is not fully displayed, hover over the plot, right-click, and select "Disable Scrolling for Outputs".
12 |
13 |
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/.ipynb_checkpoints/Capture_annotation-checkpoint.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/.ipynb_checkpoints/Capture_annotation-checkpoint.PNG
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/Capture_annotation.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/Capture_annotation.PNG
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/Sample_05_THY45_Z5_ch0009.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/Sample_05_THY45_Z5_ch0009.jpg
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/Sample_05_THY45_Z5_ch0058.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/Sample_05_THY45_Z5_ch0058.jpg
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/readme.md:
--------------------------------------------------------------------------------
1 | Example data for IBEX imaging. For this minimal example only 2 channels are included; the original dataset is 44-plex.
2 |
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue.pickle
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue.tif
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue_colors.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue_colors.pickle
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue_ppm.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_ibex/tissue_annotations/annotation_tissue_ppm.pickle
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/Capture_annotations.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/Capture_annotations.PNG
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/raw_feature_bc_matrix.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/raw_feature_bc_matrix.h5
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/spatial/scalefactors_json.json:
--------------------------------------------------------------------------------
1 | {"tissue_hires_scalef": 0.11462631820265932, "tissue_hires5K_scalef": 0.28656579550664835, "tissue_lowres_scalef": 0.034387895460797804, "fiducial_diameter_fullres": 187.3872840274998, "spot_diameter_fullres": 121.25059554720575}
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/spatial/tissue_hires_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/spatial/tissue_hires_image.png
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/spatial/tissue_lowres_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/spatial/tissue_lowres_image.png
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations.pickle
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations.tif
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations_colors.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations_colors.pickle
--------------------------------------------------------------------------------
/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations_ppm.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/data/tissue_tag_minimal_example_visium/tissue_annotations/annotations_ppm.pickle
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 |
3 | with open("README.md", "r") as fh:
4 | long_description = fh.read()
5 |
6 | setup(
7 | name='tissue-tag',
8 | version='0.2.2',
9 | packages=find_packages(),
10 | install_requires=[
11 | 'opencv-python',
12 | 'Pillow',
13 | 'bokeh',
14 | 'jupyter-bokeh',
15 | 'matplotlib',
16 | 'seaborn',
17 | 'scipy',
18 | 'scikit-image',
19 | 'tqdm',
20 | 'scikit-learn',
21 | 'holoviews',
22 | 'datashader',
23 | 'panel==1.3.8',
24 | 'jupyterlab'
25 | ],
26 | author='Oren Amsalem, Nadav Yayon, Andrian Yang',
27 | author_email='ny1@sanger.ac.uk',
28 | description="Tissue Tag: jupyter image annotator",
29 | long_description=long_description,
30 | long_description_content_type="text/markdown",
31 | url='https://github.com/nadavyayon/TissueTag',
32 | classifiers=[
33 | 'Programming Language :: Python :: 3',
34 | 'Operating System :: OS Independent',
35 | ],
36 | )
37 |
38 |
--------------------------------------------------------------------------------
/tissueTag_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/tissueTag_logo.png
--------------------------------------------------------------------------------
/tissue_tag/.ipynb_checkpoints/tissue_tag-checkpoint.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import bokeh
3 | import json
4 | import matplotlib
5 | import matplotlib.font_manager as fm
6 | import matplotlib.pyplot as plt
7 | import numpy as np
8 | import pandas as pd
9 | import pickle
10 | import random
11 | import scipy
12 | import seaborn as sns
13 | import skimage
14 | import warnings
15 | import os
16 | import holoviews as hv
17 | import holoviews.operation.datashader as hd
18 | import panel as pn
19 | from bokeh.models import FreehandDrawTool, PolyDrawTool, PolyEditTool,TabPanel, Tabs, UndoTool
20 | from bokeh.plotting import figure, show
21 | from functools import partial
22 | from io import BytesIO
23 | from packaging import version
24 | from PIL import Image, ImageColor, ImageDraw, ImageEnhance, ImageFilter, ImageFont
25 | from scipy import interpolate
26 | from scipy.spatial import distance
27 | from skimage import data, feature, future, segmentation
28 | from skimage.draw import polygon, disk
29 | from sklearn.ensemble import RandomForestClassifier
30 | from tqdm import tqdm
31 | import numpy as np
32 | import pandas as pd
33 | import skimage.transform
34 | import skimage.draw
35 | import scipy.ndimage
36 | hv.extension('bokeh')
37 |
38 | try:
39 |     import scanpy as sc
40 | except ImportError:
41 | print('scanpy is not available')
42 |
43 | Image.MAX_IMAGE_PIXELS = None
44 |
45 | font_path = fm.findfont('DejaVu Sans')
46 |
47 | class CustomFreehandDraw(hv.streams.FreehandDraw):
48 | """
49 | This custom class adds the ability to customise the icon for the FreeHandDraw tool.
50 | """
51 |
52 | def __init__(self, empty_value=None, num_objects=0, styles=None, tooltip=None, icon_colour="black", **params):
53 | self.icon_colour = icon_colour
54 | super().__init__(empty_value, num_objects, styles, tooltip, **params)
55 |
56 | # Logic to update plot without last annotation
57 | class CustomFreehandDrawCallback(hv.plotting.bokeh.callbacks.PolyDrawCallback):
58 | """
59 | This custom class is the corresponding callback for the CustomFreeHandDraw which will render a custom icon for
60 | the FreeHandDraw tool.
61 | """
62 |
63 | def initialize(self, plot_id=None):
64 | plot = self.plot
65 | cds = plot.handles['cds']
66 | glyph = plot.handles['glyph']
67 | stream = self.streams[0]
68 | if stream.styles:
69 | self._create_style_callback(cds, glyph)
70 | kwargs = {}
71 | if stream.tooltip:
72 | kwargs['description'] = stream.tooltip
73 | if stream.empty_value is not None:
74 | kwargs['empty_value'] = stream.empty_value
75 | kwargs['icon'] = create_icon(stream.tooltip[0], stream.icon_colour)
76 | poly_tool = FreehandDrawTool(
77 | num_objects=stream.num_objects,
78 | renderers=[plot.handles['glyph_renderer']],
79 | **kwargs
80 | )
81 | plot.state.tools.append(poly_tool)
82 | self._update_cds_vdims(cds.data)
83 | hv.plotting.bokeh.callbacks.CDSCallback.initialize(self, plot_id)
84 |
85 |
86 | class CustomPolyDraw(hv.streams.PolyDraw):
87 | """
88 | Attaches a FreehandDrawTool and syncs the datasource.
89 | """
90 |
91 | def __init__(self, empty_value=None, drag=True, num_objects=0, show_vertices=False, vertex_style={}, styles={},
92 | tooltip=None, icon_colour="black", **params):
93 | self.icon_colour = icon_colour
94 | super().__init__(empty_value, drag, num_objects, show_vertices, vertex_style, styles, tooltip, **params)
95 |
96 | class CustomPolyDrawCallback(hv.plotting.bokeh.callbacks.GlyphDrawCallback):
97 |
98 | def initialize(self, plot_id=None):
99 | plot = self.plot
100 | stream = self.streams[0]
101 | cds = self.plot.handles['cds']
102 | glyph = self.plot.handles['glyph']
103 | renderers = [plot.handles['glyph_renderer']]
104 | kwargs = {}
105 | if stream.num_objects:
106 | kwargs['num_objects'] = stream.num_objects
107 | if stream.show_vertices:
108 | vertex_style = dict({'size': 10}, **stream.vertex_style)
109 | r1 = plot.state.scatter([], [], **vertex_style)
110 | kwargs['vertex_renderer'] = r1
111 | if stream.styles:
112 | self._create_style_callback(cds, glyph)
113 | if stream.tooltip:
114 | kwargs['description'] = stream.tooltip
115 | if stream.empty_value is not None:
116 | kwargs['empty_value'] = stream.empty_value
117 | kwargs['icon'] = create_icon(stream.tooltip[0], stream.icon_colour)
118 | poly_tool = PolyDrawTool(
119 | drag=all(s.drag for s in self.streams), renderers=renderers,
120 | **kwargs
121 | )
122 | plot.state.tools.append(poly_tool)
123 | self._update_cds_vdims(cds.data)
124 | super().initialize(plot_id)
125 |
126 | # Overload the callback from holoviews to use the custom FreeHandDrawCallback class. Probably not safe.
127 | hv.plotting.bokeh.callbacks.Stream._callbacks['bokeh'].update({
128 | CustomFreehandDraw: CustomFreehandDrawCallback,
129 | CustomPolyDraw: CustomPolyDrawCallback
130 | })
131 |
132 | class SynchronisedFreehandDrawLink(hv.plotting.links.Link):
133 | """
134 | This custom class is a helper designed for creating synchronised FreehandDraw tools.
135 | """
136 |
137 | _requires_target = True
138 |
139 | class SynchronisedFreehandDrawCallback(hv.plotting.bokeh.LinkCallback):
140 | """
141 | This custom class implements the method to synchronise data between two FreehandDraw tools by manually updating
142 | the data_source of the linked tools.
143 | """
144 |
145 | source_model = "cds"
146 | source_handles = ["plot"]
147 | target_model = "cds"
148 | target_handles = ["plot"]
149 | on_source_changes = ["data"]
150 | on_target_changes = ["data"]
151 |
152 | source_code = """
153 | target_cds.data = source_cds.data
154 | target_cds.change.emit()
155 | """
156 |
157 | target_code = """
158 | source_cds.data = target_cds.data
159 | source_cds.change.emit()
160 | """
161 |
162 | # Register the callback class to the link class
163 | SynchronisedFreehandDrawLink.register_callback('bokeh', SynchronisedFreehandDrawCallback)
164 |
165 | def to_base64(img):
166 | buffered = BytesIO()
167 | img.save(buffered, format="png")
168 | data = base64.b64encode(buffered.getvalue()).decode('utf-8')
169 | return f'data:image/png;base64,{data}'
170 |
171 | def create_icon(name, color):
172 | font_size = 25
173 | img = Image.new('RGBA', (30, 30), (255, 0, 0, 0))
174 | ImageDraw.Draw(img).text((5, 2), name, fill=tuple((np.array(matplotlib.colors.to_rgb(color)) * 255).astype(int)),
175 | font=ImageFont.truetype(font_path, font_size))
176 | if version.parse(bokeh.__version__) < version.parse("3.1.0"):
177 | img = to_base64(img)
178 | return img
179 |
180 | def read_image(
181 | path,
182 | ppm_image=None,
183 | ppm_out=1,
184 | contrast_factor=1,
185 | background_image_path=None,
186 | plot=True,
187 | ):
188 | """
189 | Reads an H&E or fluorescent image and returns the image with optional enhancements.
190 |
191 | Parameters
192 | ----------
193 | path : str
194 | Path to the image. The image must be in a format supported by Pillow. Refer to
195 | https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html for the list
196 | of supported formats.
197 | ppm_image : float, optional
198 | Pixels per microns of the input image. If not provided, this will be extracted from the image
199 | metadata with info['resolution']. If the metadata is not present, an error will be thrown.
200 | ppm_out : float, optional
201 | Pixels per microns of the output image. Defaults to 1.
202 | contrast_factor : int, optional
203 | Factor to adjust contrast for output image, typically between 2-5. Defaults to 1.
204 | background_image_path : str, optional
205 | Path to a background image. If provided, this image and the input image are combined
206 | to create a virtual H&E (vH&E). If not provided, vH&E will not be performed.
207 |     plot : bool, optional
208 |         Whether to plot the loaded image. Default is True.
209 |
210 | Returns
211 | -------
212 | numpy.ndarray
213 | The processed image.
214 | float
215 | The pixels per microns of the input image.
216 | float
217 | The pixels per microns of the output image.
218 |
219 | Raises
220 | ------
221 | ValueError
222 | If 'ppm_image' is not provided and cannot be extracted from the image metadata.
223 | """
224 |
225 | im = Image.open(path)
226 |     if not ppm_image:
227 |         try:
228 |             ppm_image = im.info['resolution'][0]
229 |             print('found ppm in image metadata - ' + str(ppm_image))
230 |         except KeyError:
231 |             raise ValueError('could not find ppm in image metadata, please provide a ppm value')
232 | width, height = im.size
233 | newsize = (int(width/ppm_image*ppm_out), int(height/ppm_image*ppm_out))
234 | # resize
235 | im = im.resize(newsize,Image.Resampling.LANCZOS)
236 | im = im.convert("RGBA")
237 | #increase contrast
238 | enhancer = ImageEnhance.Contrast(im)
239 | factor = contrast_factor
240 | im = enhancer.enhance(factor*factor)
241 |
242 | if background_image_path:
243 | im2 = Image.open(background_image_path)
244 | # resize
245 | im2 = im2.resize(newsize,Image.Resampling.LANCZOS)
246 | im2 = im2.convert("RGBA")
247 | #increase contrast
248 | enhancer = ImageEnhance.Contrast(im2)
249 | factor = contrast_factor
250 | im2 = enhancer.enhance(factor*factor)
251 | # virtual H&E
252 | # im2 = im2.convert("RGBA")
253 | im = simonson_vHE(np.array(im).astype('uint8'),np.array(im2).astype('uint8'))
254 | if plot:
255 | plt.figure(dpi=100)
256 | plt.imshow(im,origin='lower')
257 | plt.show()
258 |
259 | return np.array(im),ppm_image,ppm_out
260 |
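# Usage sketch (illustrative path; ppm_image can be omitted when the file
# carries resolution metadata):
#   im, ppm_in, ppm_out = read_image('data/he_scan.tif', ppm_image=2, contrast_factor=2)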
261 | def read_visium(
262 | spaceranger_dir_path,
263 | use_resolution='hires',
264 | res_in_ppm = None,
265 | fullres_path = None,
266 | header = None,
267 | plot = True,
268 | in_tissue = True,
269 | ):
270 | """
271 | Reads 10X Visium image data from SpaceRanger (v1.3.0).
272 |
273 | Parameters
274 | ----------
275 | spaceranger_dir_path : str
276 | Path to the 10X SpaceRanger output folder.
277 | use_resolution : {'hires', 'lowres', 'fullres'}, optional
278 | Desired image resolution. 'fullres' refers to the original image that was sent to SpaceRanger
279 | along with sequencing data. If 'fullres' is specified, `fullres_path` must also be provided.
280 | Defaults to 'hires'.
281 | res_in_ppm : float, optional
282 | Used when working with full resolution images to resize the full image to a specified pixels per
283 | microns.
284 | fullres_path : str, optional
285 | Path to the full resolution image used for mapping. This must be specified if `use_resolution` is
286 | set to 'fullres'.
287 |     header : int, optional
288 |         Newer SpaceRanger versions may need this set to 0. Default is None.
289 |     plot : bool, optional
290 |         Whether to plot the Visium object to scale.
291 |     in_tissue : bool, optional
292 |         Whether to take only in-tissue spots rather than all Visium spots.
293 |
294 | Returns
295 | -------
296 | numpy.ndarray
297 | The processed image.
298 | float
299 | The pixels per microns of the image.
300 | pandas.DataFrame
301 | A DataFrame containing information on the tissue positions.
302 |
303 | Raises
304 | ------
305 | AssertionError
306 | If 'use_resolution' is set to 'fullres' but 'fullres_path' is not specified.
307 | """
308 | plt.figure(figsize=[12,12])
309 |
310 | spotsize = 55 #um spot size of a visium spot
311 |
312 | scalef = json.load(open(spaceranger_dir_path+'spatial/scalefactors_json.json','r'))
313 | if use_resolution=='fullres':
314 | assert fullres_path is not None, 'if use_resolution=="fullres" fullres_path has to be specified'
315 |
316 | df = pd.read_csv(spaceranger_dir_path+'spatial/tissue_positions_list.csv',header=header)
317 | if header==0:
318 | df = df.set_index(keys='barcode')
319 | if in_tissue:
320 | df = df[df['in_tissue']>0] # in tissue
321 | # turn df to mu
322 | fullres_ppm = scalef['spot_diameter_fullres']/spotsize
323 | df['pxl_row_in_fullres'] = df['pxl_row_in_fullres']/fullres_ppm
324 | df['pxl_col_in_fullres'] = df['pxl_col_in_fullres']/fullres_ppm
325 | else:
326 | df = df.set_index(keys=0)
327 | if in_tissue:
328 | df = df[df[1]>0] # in tissue
329 | # turn df to mu
330 | fullres_ppm = scalef['spot_diameter_fullres']/spotsize
331 | df['pxl_row_in_fullres'] = df[4]/fullres_ppm
332 | df['pxl_col_in_fullres'] = df[5]/fullres_ppm
333 |
334 |
335 | if use_resolution=='fullres':
336 | im = Image.open(fullres_path)
337 | ppm = fullres_ppm
338 | else:
339 | im = Image.open(spaceranger_dir_path+'spatial/tissue_'+use_resolution+'_image.png')
340 | ppm = scalef['spot_diameter_fullres']*scalef['tissue_'+use_resolution+'_scalef']/spotsize
341 |
342 |
343 | if res_in_ppm:
344 | width, height = im.size
345 | newsize = (int(width*res_in_ppm/ppm), int(height*res_in_ppm/ppm))
346 | im = im.resize(newsize,Image.Resampling.LANCZOS)
347 | ppm = res_in_ppm
348 |
349 | # translate from mu to pixel
350 | df['pxl_col'] = df['pxl_col_in_fullres']*ppm
351 | df['pxl_row'] = df['pxl_row_in_fullres']*ppm
352 |
353 |
354 | im = im.convert("RGBA")
355 |
356 | if plot:
357 | coordinates = np.vstack((df['pxl_col'],df['pxl_row']))
358 | plt.imshow(im,origin='lower')
359 | plt.plot(coordinates[0,:],coordinates[1,:],'.')
360 | plt.title( 'ppm - '+str(ppm))
361 |
362 | return np.array(im), ppm, df
363 |
364 |
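# Usage sketch (illustrative path, which must end with '/'; newer SpaceRanger
# outputs may need header=0):
#   im, ppm, df = read_visium('spaceranger_out/', use_resolution='hires')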
365 | def scribbler(imarray, anno_dict, plot_size=1024, use_datashader=False):
366 | """
367 | Creates interactive scribble line annotations with Holoviews.
368 |
369 | Parameters
370 | ----------
371 | imarray : np.array
372 | Image in numpy array format.
373 | anno_dict : dict
374 | Dictionary of structures to annotate and colors for the structures.
375 | plot_size : int, optional
376 | Used to adjust the plotting area size. Default is 1024.
377 |     use_datashader : bool, optional
378 |         Whether to use datashader for rendering the image. Recommended for high resolution images. Default is False.
379 |
380 | Returns
381 | -------
382 | holoviews.core.overlay.Overlay
383 | A Holoviews overlay composed of the image and annotation layers.
384 | dict
385 | Dictionary of renderers for each annotation.
386 | """
387 |
388 | imarray_c = imarray.astype('uint8').copy()
389 | imarray_c = np.flip(imarray_c, 0)
390 |
391 | # Create a new holoview image
392 | img = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
393 | if use_datashader:
394 | img = hd.regrid(img)
395 | ds_img = img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
396 |
397 | # Create the plot list object
398 | plot_list = [ds_img]
399 |
400 | # Render plot using bokeh backend
401 | p = hv.render(img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size)),
402 | backend="bokeh")
403 |
404 | render_dict = {}
405 | path_dict = {}
406 | for key in anno_dict.keys():
407 | path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
408 | render_dict[key] = CustomFreehandDraw(source=path_dict[key], num_objects=200, tooltip=key,
409 | icon_colour=anno_dict[key])
410 | plot_list.append(path_dict[key])
411 |
412 | # Create the plot from the plot list
413 | p = hd.Overlay(plot_list).collate()
414 |
415 | return p, render_dict
416 |
417 |
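# Usage sketch (structure names and colours are illustrative; display p in a
# notebook cell and draw with the per-structure freehand tools):
#   anno_dict = {'cortex': '#ff0000', 'medulla': '#0000ff'}
#   p, render_dict = scribbler(im, anno_dict, plot_size=1024)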
418 | def annotator(imarray, labels, anno_dict, plot_size=1024,invert_y=False,use_datashader=False,alpha=0.7):
419 | """
420 | Interactive annotation tool with line annotations using Panel tabs for toggling between morphology and annotation.
421 |
422 | Parameters
423 | ----------
424 | imarray: np.array
425 | Image in numpy array format.
426 | labels: np.array
427 | Label image in numpy array format.
428 | anno_dict: dict
429 | Dictionary of structures to annotate and colors for the structures.
430 | plot_size: int, default=1024
431 | Figure size for plotting.
432 |     invert_y : bool
433 |         Whether to invert the plot along the y axis.
434 |     use_datashader : bool, optional
435 |         Whether to use datashader for rendering the image. Recommended for high resolution images. Default is False.
436 |     alpha : float
437 |         Blending extent of the "Annotation" tab.
438 |
439 | Returns
440 | -------
441 | Panel Tabs object
442 | A Tabs object containing the annotation and image panels.
443 | dict
444 | Dictionary of Bokeh renderers for each annotation.
445 | """
446 | import logging
447 | logging.getLogger('bokeh.core.validation.check').setLevel(logging.ERROR)
448 |
449 | # convert label image to rgb for annotation
450 | labels_rgb = rgb_from_labels(labels, colors=list(anno_dict.values()))
451 | annotation = overlay_labels(imarray,labels_rgb,alpha=alpha,show=False)
452 |
453 | annotation_c = annotation.astype('uint8').copy()
454 | if not invert_y:
455 | annotation_c = np.flip(annotation_c, 0)
456 |
457 | imarray_c = imarray.astype('uint8').copy()
458 | if not invert_y:
459 | imarray_c = np.flip(imarray_c, 0)
460 |
461 | # Create new holoview images
462 | anno = hv.RGB(annotation_c, bounds=(0, 0, annotation_c.shape[1], annotation_c.shape[0]))
463 | if use_datashader:
464 | anno = hd.regrid(anno)
465 | ds_anno = anno.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
466 |
467 | img = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
468 | if use_datashader:
469 | img = hd.regrid(img)
470 | ds_img = img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
471 |
472 | anno_tab_plot_list = [ds_anno]
473 | img_tab_plot_list = [ds_img]
474 |
475 | render_dict = {}
476 | img_render_dict = {}
477 | path_dict = {}
478 | img_path_dict = {}
479 | for key in anno_dict.keys():
480 | path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
481 | render_dict[key] = CustomFreehandDraw(source=path_dict[key], num_objects=200, tooltip=key,
482 | icon_colour=anno_dict[key])
483 |
484 | img_path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
485 | img_render_dict[key] = CustomFreehandDraw(source=img_path_dict[key], num_objects=200, tooltip=key,
486 | icon_colour=anno_dict[key])
487 |
488 | SynchronisedFreehandDrawLink(path_dict[key], img_path_dict[key])
489 | anno_tab_plot_list.append(path_dict[key])
490 | img_tab_plot_list.append(img_path_dict[key])
491 |
492 | # Create the tabbed view
493 | p = pn.Tabs(("Annotation", pn.panel(hd.Overlay(anno_tab_plot_list).collate())),
494 | ("Image", pn.panel(hd.Overlay(img_tab_plot_list).collate())), dynamic=False)
495 | return p, render_dict
496 |
497 |
498 |
499 | def complete_pixel_gaps(x,y):
500 | """
501 |     Completes pixel gaps in given x, y coordinates by filling integer steps along each segment.
502 |
503 | Parameters:
504 | x : list
505 | list of x coordinates
506 | y : list
507 | list of y coordinates
508 |
509 | Returns:
510 | new_x, new_y : tuple
511 | tuple of completed x and y coordinates
512 | """
513 |
514 | new_x_1 = []
515 | new_x_2 = []
516 | # iterate over x coordinate values
517 | for idx, px in enumerate(x[:-1]):
518 | # interpolate between each pair of x points
519 | interpolation = interpolate.interp1d(x[idx:idx+2], y[idx:idx+2])
520 |         interpolated_x_1 = np.linspace(x[idx], x[idx+1], num=int(np.abs(x[idx+1] - x[idx])) + 1)
521 | interpolated_x_2 = interpolation(interpolated_x_1).astype(int)
522 | # add interpolated values to new x lists
523 | new_x_1 += list(interpolated_x_1)
524 | new_x_2 += list(interpolated_x_2)
525 |
526 | new_y_1 = []
527 | new_y_2 = []
528 | # iterate over y coordinate values
529 | for idx, py in enumerate(y[:-1]):
530 | # interpolate between each pair of y points
531 | interpolation = interpolate.interp1d(y[idx:idx+2], x[idx:idx+2])
532 |         interpolated_y_1 = np.linspace(y[idx], y[idx+1], num=int(np.abs(y[idx+1] - y[idx])) + 1)
533 | interpolated_y_2 = interpolation(interpolated_y_1).astype(int)
534 | # add interpolated values to new y lists
535 | new_y_1 += list(interpolated_y_1)
536 | new_y_2 += list(interpolated_y_2)
537 |
538 |     # combine both passes: (x, interpolated y) pairs followed by (interpolated x, y) pairs
539 | new_x = new_x_1 + new_y_2
540 | new_y = new_x_2 + new_y_1
541 |
542 | return new_x, new_y
543 |
544 |
545 |
546 | def scribble_to_labels(imarray, render_dict, line_width=10):
547 | """
548 | Extract scribbles to a label image.
549 |
550 | Parameters
551 | ----------
552 | imarray: np.array
553 | Image in numpy array format used to calculate the label image size.
554 | render_dict: dict
555 | Bokeh object carrying annotations.
556 | line_width: int
557 | Width of the line labels.
558 |
559 | Returns
560 | -------
561 | np.array
562 | Annotation image.
563 | """
564 | annotations = {}
565 | training_labels = np.zeros((imarray.shape[1], imarray.shape[0]), dtype=np.uint8)
566 |
567 | for idx, a in enumerate(render_dict.keys()):
568 | xs = []
569 | ys = []
570 | annotations[a] = []
571 |
572 | for o in range(len(render_dict[a].data['xs'])):
573 | xt, yt = complete_pixel_gaps(
574 | np.array(render_dict[a].data['xs'][o]).astype(int),
575 | np.array(render_dict[a].data['ys'][o]).astype(int)
576 | )
577 | xs.extend(xt)
578 | ys.extend(yt)
579 | annotations[a].append(np.vstack([
580 | np.array(render_dict[a].data['xs'][o]).astype(int),
581 | np.array(render_dict[a].data['ys'][o]).astype(int)
582 | ]))
583 |
584 | xs = np.array(xs)
585 | ys = np.array(ys)
586 | inshape = (xs > 0) & (xs < imarray.shape[1]) & (ys > 0) & (ys < imarray.shape[0])
587 | xs = xs[inshape]
588 | ys = ys[inshape]
589 |
590 | training_labels[np.floor(xs).astype(int), np.floor(ys).astype(int)] = idx + 1
591 |
592 | training_labels = training_labels.transpose()
593 | return skimage.segmentation.expand_labels(training_labels, distance=line_width / 2)
594 |
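# Usage sketch (render_dict comes from scribbler(); line_width is in pixels):
#   training_labels = scribble_to_labels(im, render_dict, line_width=10)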
595 |
596 | def rgb_from_labels(labelimage, colors):
597 | """
598 | Helper function to plot from label images.
599 |
600 | Parameters
601 | ----------
602 | labelimage: np.array
603 | Label image with pixel values corresponding to labels.
604 | colors: list
605 | Colors corresponding to pixel values for plotting.
606 |
607 | Returns
608 | -------
609 | np.array
610 | Annotation image.
611 | """
612 | labelimage_rgb = np.zeros((labelimage.shape[0], labelimage.shape[1], 4))
613 |
614 | for c in range(len(colors)):
615 | color = ImageColor.getcolor(colors[c], "RGB")
616 | labelimage_rgb[labelimage == c + 1, 0:3] = np.array(color)
617 |
618 | labelimage_rgb[:, :, 3] = 255
619 | return labelimage_rgb.astype('uint8')
620 |
621 |
622 |
623 | def sk_rf_classifier(im, training_labels,anno_dict,plot=True):
624 | """
625 | A simple random forest pixel classifier from sklearn.
626 |
627 | Parameters
628 | ----------
629 | im : array
630 | The actual image to predict the labels from, should be the same size as training_labels.
631 | training_labels : array
632 | Label image with pixel values corresponding to labels.
633 | anno_dict: dict
634 | Dictionary of structures to annotate and colors for the structures.
635 |     plot : bool, optional
636 |         Whether to plot the predicted labels. Default is True.
637 |
638 | Returns
639 | -------
640 | array
641 | Predicted label map.
642 | """
643 |
644 | sigma_min = 1
645 | sigma_max = 16
646 |     features_func = partial(feature.multiscale_basic_features,
647 |                             intensity=True, edges=False, texture=True,
648 |                             sigma_min=sigma_min, sigma_max=sigma_max, channel_axis=-1)
649 |
650 | features = features_func(im)
651 | clf = RandomForestClassifier(n_estimators=50, n_jobs=-1,
652 | max_depth=10, max_samples=0.05)
653 | clf = future.fit_segmenter(training_labels, features, clf)
654 |
655 | labels = future.predict_segmenter(features, clf)
656 |
657 | if plot:
658 | labels_rgb = rgb_from_labels(labels,colors=list(anno_dict.values()))
659 | overlay_labels(im,labels_rgb,alpha=0.7)
660 |
661 | return labels
662 |
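# Usage sketch (training_labels from scribble_to_labels(); colours taken from anno_dict):
#   labels = sk_rf_classifier(im, training_labels, anno_dict)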
663 | def overlay_labels(im1, im2, alpha=0.8, show=True):
664 | """
665 | Helper function to merge 2 images.
666 |
667 | Parameters
668 | ----------
669 | im1 : array
670 | 1st image.
671 | im2 : array
672 | 2nd image.
673 | alpha : float, optional
674 | Blending factor, by default 0.8.
675 | show : bool, optional
676 | If to show the merged plot or not, by default True.
677 |
678 | Returns
679 | -------
680 | array
681 | The merged image.
682 | """
683 |
684 | #generate overlay image
685 | plt.rcParams["figure.figsize"] = [10, 10]
686 | plt.rcParams["figure.dpi"] = 100
687 | out_img = np.zeros(im1.shape,dtype=im1.dtype)
688 | out_img[:,:,:] = (alpha * im1[:,:,:]) + ((1-alpha) * im2[:,:,:])
689 | out_img[:,:,3] = 255
690 | if show:
691 | plt.imshow(out_img,origin='lower')
692 | return out_img
693 |
694 | def update_annotator(imarray, labels, anno_dict, render_dict, alpha=0.5,plot=True):
695 | """
696 | Updates annotations and generates overlay (out_img) and the label image (corrected_labels).
697 |
698 | Parameters
699 | ----------
700 | imarray : numpy.ndarray
701 | Image in numpy array format.
702 | labels : numpy.ndarray
703 | Label image in numpy array format.
704 | anno_dict : dict
705 | Dictionary of structures to annotate and colors for the structures.
706 | render_dict : dict
707 | Bokeh data container.
708 | alpha : float
709 | Blending factor.
710 |     plot : bool, optional
711 |         Whether to plot the updated annotations. Default is True.
712 |
713 | Returns
714 | -------
715 | Returns corrected labels as numpy array.
716 | """
717 |
718 | corrected_labels = labels.copy()
719 | for idx, a in enumerate(render_dict.keys()):
720 | if render_dict[a].data['xs']:
721 | print(a)
722 | for o in range(len(render_dict[a].data['xs'])):
723 | x = np.array(render_dict[a].data['xs'][o]).astype(int)
724 | y = np.array(render_dict[a].data['ys'][o]).astype(int)
725 | rr, cc = polygon(y, x)
726 | inshape = np.where(np.array(labels.shape[0] > rr) & np.array(0 < rr) & np.array(labels.shape[1] > cc) & np.array(0 < cc))[0]
727 | corrected_labels[rr[inshape], cc[inshape]] = idx + 1
728 | if plot:
729 | rgb = rgb_from_labels(corrected_labels, list(anno_dict.values()))
730 | out_img = overlay_labels(imarray, rgb, alpha=alpha)
731 |
732 | return corrected_labels
733 |
734 |
735 | def rescale_image(label_image, target_size):
736 | """
737 | Rescales label image to original image size.
738 |
739 | Parameters
740 | ----------
741 | label_image : numpy.ndarray
742 | Labeled image.
743 | target_size : tuple
744 | Final dimensions.
745 |
746 | Returns
747 | -------
748 | numpy.ndarray
749 | Rescaled image.
750 | """
751 | imP = Image.fromarray(label_image)
752 | newsize = (target_size[0], target_size[1])
753 | return np.array(imP.resize(newsize))
754 |
755 |
756 | def save_annotation(folder, label_image, file_name, anno_names, anno_colors, ppm):
757 | """
758 | Saves the annotated image as .tif and in addition saves the translation
759 | from annotations to labels in a pickle file.
760 |
761 | Parameters
762 | ----------
763 | folder : str
764 | Folder where to save the annotations.
765 | label_image : numpy.ndarray
766 | Labeled image.
767 | file_name : str
768 | Name for tif image and pickle.
769 | anno_names : list
770 | Names of annotated objects.
771 | anno_colors : list
772 | Colors of annotated objects.
773 | ppm : float
774 | Pixels per microns.
775 | """
776 |
777 | label_image = Image.fromarray(label_image)
778 | label_image.save(folder + file_name + '.tif')
779 | with open(folder + file_name + '.pickle', 'wb') as handle:
780 | pickle.dump(dict(zip(range(1, len(anno_names) + 1), anno_names)), handle, protocol=pickle.HIGHEST_PROTOCOL)
781 | with open(folder + file_name + '_colors.pickle', 'wb') as handle:
782 | pickle.dump(dict(zip(anno_names, anno_colors)), handle, protocol=pickle.HIGHEST_PROTOCOL)
783 | with open(folder + file_name + '_ppm.pickle', 'wb') as handle:
784 | pickle.dump({'ppm': ppm}, handle, protocol=pickle.HIGHEST_PROTOCOL)
785 |
786 |
787 | def load_annotation(folder, file_name, load_colors=False):
788 | """
789 | Loads the annotated image from a .tif file and the translation from annotations
790 | to labels from a pickle file.
791 |
792 | Parameters
793 | ----------
794 | folder : str
795 | Folder path for annotations.
796 | file_name : str
797 | Name for tif image and pickle without extensions.
798 | load_colors : bool, optional
799 | If True, get original colors used for annotations. Default is False.
800 |
801 | Returns
802 | -------
803 | tuple
804 | Returns annotation image, annotation order, pixels per microns, and annotation color.
805 | If `load_colors` is False, annotation color is not returned.
806 | """
807 |
808 | imP = Image.open(folder + file_name + '.tif')
809 |
810 |     # the pixels-per-micron value is read from the companion _ppm.pickle below
811 | im = np.array(imP)
812 |
813 | print(f'loaded annotation image - {file_name} size - {str(im.shape)}')
814 | with open(folder + file_name + '.pickle', 'rb') as handle:
815 | anno_order = pickle.load(handle)
816 | print('loaded annotations')
817 | print(anno_order)
818 | with open(folder + file_name + '_ppm.pickle', 'rb') as handle:
819 | ppm = pickle.load(handle)
820 | print('loaded ppm')
821 | print(ppm)
822 |
823 | if load_colors:
824 | with open(folder + file_name + '_colors.pickle', 'rb') as handle:
825 | anno_color = pickle.load(handle)
826 | print('loaded color annotations')
827 | print(anno_color)
828 | return im, anno_order, ppm['ppm'], anno_color
829 |
830 | else:
831 | return im, anno_order, ppm['ppm']
832 |
833 |
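# Usage sketch of the save/load round trip (folder is illustrative and must end with '/'):
#   save_annotation('tissue_annotations/', labels, 'annotations',
#                   list(anno_dict.keys()), list(anno_dict.values()), ppm)
#   im_lab, anno_order, ppm = load_annotation('tissue_annotations/', 'annotations')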
834 |
835 | def simonson_vHE(dapi_image, eosin_image):
836 | """
837 | Create virtual H&E images using DAPI and eosin images.
838 | from the developer website:
839 | The method is applied to data on a multiplex/Keyence microscope to produce virtual H&E images
840 | using fluorescence data. If you find it useful, consider citing the relevant article:
841 | Creating virtual H&E images using samples imaged on a commercial multiplex platform
842 | Paul D. Simonson, Xiaobing Ren, Jonathan R. Fromm
843 | doi: https://doi.org/10.1101/2021.02.05.21249150
844 |
845 | Parameters
846 | ----------
847 | dapi_image : ndarray
848 | DAPI image data.
849 | eosin_image : ndarray
850 | Eosin image data.
851 |
852 | Returns
853 | -------
854 | ndarray
855 | Virtual H&E image.
856 | """
857 |
858 | def createVirtualHE(dapi_image, eosin_image, k1, k2, background, beta_DAPI, beta_eosin):
859 | new_image = np.empty([dapi_image.shape[0], dapi_image.shape[1], 4])
860 | new_image[:,:,0] = background[0] + (1 - background[0]) * np.exp(- k1 * beta_DAPI[0] * dapi_image - k2 * beta_eosin[0] * eosin_image)
861 | new_image[:,:,1] = background[1] + (1 - background[1]) * np.exp(- k1 * beta_DAPI[1] * dapi_image - k2 * beta_eosin[1] * eosin_image)
862 | new_image[:,:,2] = background[2] + (1 - background[2]) * np.exp(- k1 * beta_DAPI[2] * dapi_image - k2 * beta_eosin[2] * eosin_image)
863 | new_image[:,:,3] = 1
864 | new_image = new_image*255
865 | return new_image.astype('uint8')
866 |
867 | k1 = k2 = 0.001
868 |
869 | background = [0.25, 0.25, 0.25]
870 |
871 | beta_DAPI = [9.147, 6.9215, 1.0]
872 |
873 | beta_eosin = [0.1, 15.8, 0.3]
874 |
875 | dapi_image = dapi_image[:,:,0]+dapi_image[:,:,1]
876 | eosin_image = eosin_image[:,:,0]+eosin_image[:,:,1]
877 |
878 | print(dapi_image.shape)
879 | return createVirtualHE(dapi_image, eosin_image, k1, k2, background, beta_DAPI, beta_eosin)
880 |
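# Usage sketch (both inputs are RGBA uint8 arrays of the same shape; only the
# first two channels of each are summed):
#   vhe = simonson_vHE(np.array(dapi_im), np.array(eosin_im))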
881 | def generate_hires_grid(
882 | im,
883 | spot_diameter,
884 | pixels_per_micron,
885 | ):
886 |     """
887 |     Creates a hexagonal grid of a specified size and density.
888 |
889 |     Parameters
890 |     ----------
891 |     im
892 |         Image to fit the grid on (mostly for dimensions).
893 |     spot_diameter
894 |         In microns - determines the spot size and thus the density of the grid.
895 |     pixels_per_micron
896 |         Image resolution.
897 |
898 |     """
899 | helper = spot_diameter * pixels_per_micron
900 | X1 = np.linspace(helper, im.shape[0] - helper, round(im.shape[0] / helper))
901 | Y1 = np.linspace(helper, im.shape[1] - 2 * helper, round(im.shape[1] / (2 * helper)))
902 | X2 = X1 + spot_diameter * pixels_per_micron / 2
903 | Y2 = Y1 + helper
904 | Gx1, Gy1 = np.meshgrid(X1, Y1)
905 | Gx2, Gy2 = np.meshgrid(X2, Y2)
906 | positions1 = np.vstack([Gy1.ravel(), Gx1.ravel()])
907 | positions2 = np.vstack([Gy2.ravel(), Gx2.ravel()])
908 | positions = np.hstack([positions1, positions2])
909 |
910 | return positions
911 |
912 |
913 |
914 | def create_disk_kernel(radius, shape):
915 | rr, cc = skimage.draw.disk((radius, radius), radius, shape=shape)
916 | kernel = np.zeros(shape, dtype=bool)
917 | kernel[rr, cc] = True
918 | return kernel
919 |
920 | def apply_median_filter(image, kernel):
921 | return scipy.ndimage.median_filter(image, footprint=kernel)
922 |
923 | def grid_anno(
924 | im,
925 | annotation_image_list,
926 | annotation_image_names,
927 | annotation_label_list,
928 | spot_diameter,
929 | ppm_in,
930 | ppm_out,
931 | ):
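    """
    Transfers label images to a hexagonal grid of spots.

    Parameters
    ----------
    im : numpy.ndarray
        Image the grid is fitted on (used for its dimensions).
    annotation_image_list : list
        Label images to sample at the grid positions.
    annotation_image_names : list
        Output column name for each label image.
    annotation_label_list : list
        Dictionaries mapping label numbers to annotation names, one per label image.
    spot_diameter : float
        Spot size in microns; determines the grid density.
    ppm_in, ppm_out : float
        Pixels per micron of the input image and of the output coordinates.

    Returns
    -------
    pandas.DataFrame
        Grid coordinates with per-spot annotation columns.
    """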
932 | print(f'Generating grid with spot size - {spot_diameter}, with resolution of - {ppm_in} ppm')
933 |
934 | positions = generate_hires_grid(im, spot_diameter, ppm_in).T # Transpose for correct orientation
935 | radius = spot_diameter // 4
936 | kernel = create_disk_kernel(radius, (2*radius + 1, 2*radius + 1))
937 |
938 | df = pd.DataFrame(positions, columns=['x', 'y'])
939 | df['index'] = df.index
940 |
941 | for idx0, anno in enumerate(annotation_image_list):
942 | anno_orig = skimage.transform.resize(anno, im.shape[:2], preserve_range=True).astype('uint8')
943 | filtered_image = apply_median_filter(anno_orig, kernel)
944 |
945 | median_values = [filtered_image[int(point[1]), int(point[0])] for point in positions]
946 | anno_dict = {idx: annotation_label_list[idx0].get(val, "Unknown") for idx, val in enumerate(median_values)}
947 | number_dict = {idx: val for idx, val in enumerate(median_values)}
948 |
949 | df[annotation_image_names[idx0]] = list(anno_dict.values())
950 | df[annotation_image_names[idx0] + '_number'] = list(number_dict.values())
951 |
952 | df['x'] = df['x'] * ppm_out / ppm_in
953 | df['y'] = df['y'] * ppm_out / ppm_in
954 | df.set_index('index', inplace=True)
955 |
956 | return df
957 |
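# Usage sketch (spot_diameter in microns; labels and anno_order as returned by
# load_annotation()):
#   df_grid = grid_anno(im, [labels], ['annotations'], [anno_order],
#                       spot_diameter=30, ppm_in=ppm, ppm_out=ppm)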
958 |
959 |
960 |
961 | def dist2cluster_fast(df, annotation, KNN=5, logscale=False):
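    """
    Computes, for every grid spot, the mean euclidean distance to its KNN nearest
    neighbours within each category of `annotation`, storing the result in df as
    "L2_dist_<annotation>_<category>" columns ("L2_dist_log10_..." if logscale=True)
    and returning the raw distances as a dictionary keyed by category. Categories
    with at most KNN spots are skipped and keep the initialised zeros.
    """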
962 | from scipy.spatial import cKDTree
963 |
964 | print('calculating distance matrix with cKDTree')
965 |
966 | points = np.vstack([df['x'],df['y']]).T
967 | categories = np.unique(df[annotation])
968 |
969 | Dist2ClusterAll = {c: np.zeros(df.shape[0]) for c in categories}
970 |
971 | for idx, c in enumerate(categories):
972 | indextmp = df[annotation] == c
973 | if np.sum(indextmp) > KNN:
974 | print(c)
975 | cluster_points = points[indextmp]
976 | tree = cKDTree(cluster_points)
977 | # Get KNN nearest neighbors for each point
978 | distances, _ = tree.query(points, k=KNN)
979 | # Store the mean distance for each point to the current category
980 | if KNN == 1:
981 | Dist2ClusterAll[c] = distances # No need to take mean if only one neighbor
982 | else:
983 | Dist2ClusterAll[c] = np.mean(distances, axis=1)
984 |
985 | for c in categories:
986 | if logscale:
987 | df["L2_dist_log10_"+annotation+'_'+c] = np.log10(Dist2ClusterAll[c])
988 | else:
989 | df["L2_dist_"+annotation+'_'+c] = Dist2ClusterAll[c]
990 |
991 | return Dist2ClusterAll
992 |
993 |
994 | def anno_to_cells(df_cells, x_col, y_col, df_grid, annotation='annotations', plot=True):
995 | """
996 | Maps tissue annotations to segmented cells by nearest neighbors.
997 |
998 | Parameters
999 | ----------
1000 | df_cells : pandas.DataFrame
1001 | Dataframe with cell data.
1002 | x_col : str
1003 | Name of column with x coordinates in df_cells.
1004 | y_col : str
1005 | Name of column with y coordinates in df_cells.
1006 | df_grid : pandas.DataFrame
1007 | Dataframe with grid data.
1008 | annotation : str, optional
1009 | Name of the column with annotations in df_grid. Default is 'annotations'.
1010 | plot : bool, optional
1011 | If true, plots the coordinates of the grid space and the cell space to make sure
1012 | they are aligned. Default is True.
1013 |
1014 | Returns
1015 | -------
1016 | df_cells : pandas.DataFrame
1017 | Updated dataframe with cell data.
1018 | """
1019 |
1020 | print('make sure the coordinate systems are aligned e.g. axes are not flipped')
1021 | a = np.vstack([df_grid['x'], df_grid['y']])
1022 | b = np.vstack([df_cells[x_col], df_cells[y_col]])
1023 |
1024 | if plot:
1025 | plt.figure(dpi=100, figsize=[10, 10])
1026 | plt.title('cell space')
1027 | plt.plot(b[0], b[1], '.', markersize=1)
1028 | plt.show()
1029 |
1030 | df_grid_temp = df_grid.iloc[np.where(df_grid[annotation] != 'unassigned')[0], :].copy()
1031 | aa = np.vstack([df_grid_temp['x'], df_grid_temp['y']])
1032 | plt.figure(dpi=100, figsize=[10, 10])
1033 | plt.plot(aa[0], aa[1], '.', markersize=1)
1034 | plt.title('annotation space')
1035 | plt.show()
1036 |
1037 | annotations = df_grid.columns[~df_grid.columns.isin(['x', 'y'])]
1038 |
1039 | for k in annotations:
1040 | print('migrating - ' + k + ' to segmentations')
1041 | df_cells[k] = scipy.interpolate.griddata(points=a.T, values=df_grid[k], xi=b.T, method='nearest')
1042 |
1043 | return df_cells
1044 |
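# Usage sketch (x/y column names depend on the segmentation table):
#   df_cells = anno_to_cells(df_cells, 'centroid_x', 'centroid_y', df_grid)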
1045 |
1046 | def anno_to_visium_spots(df_spots, df_grid, ppm, plot=True,how='nearest',max_distance=10e10):
1047 | """
1048 | Maps tissue annotations to Visium spots according to the nearest neighbors.
1049 |
1050 | Parameters
1051 | ----------
1052 | df_spots : pandas.DataFrame
1053 | Dataframe with Visium spot data.
1054 | df_grid : pandas.DataFrame
1055 | Dataframe with grid data.
1056 | ppm : float
1057 | scale of annotation vs visium
1058 | plot : bool, optional
1059 | If true, plots the coordinates of the grid space and the spot space to make sure
1060 | they are aligned. Default is True.
1061 |     how : string, optional
1062 |         How the association between the two grids is made, passed to scipy.interpolate.griddata. Default is 'nearest'.
1063 |     max_distance : float, optional
1064 |         Maximal distance beyond which points are not migrated. Default is 10e10.
1065 |
1066 | Returns
1067 | -------
1068 | df_spots : pandas.DataFrame
1069 | Updated dataframe with Visium spot data.
1070 | """
1071 | import numpy as np
1072 | from scipy.interpolate import griddata
1073 | from scipy.spatial import cKDTree
1074 |
1075 | print('Make sure the coordinate systems are aligned, e.g., axes are not flipped.')
1076 | a = np.vstack([df_grid['x'], df_grid['y']])
1077 | b = np.vstack([df_spots['pxl_col_in_fullres'], df_spots['pxl_row_in_fullres']])*ppm
1078 |
1079 | if plot:
1080 | plt.figure(dpi=100, figsize=[10, 10])
1081 | plt.title('Spot space')
1082 | plt.plot(b[0], b[1], '.', markersize=1)
1083 | plt.show()
1084 |
1085 | plt.figure(dpi=100, figsize=[10, 10])
1086 | plt.plot(a[0], a[1], '.', markersize=1)
1087 | plt.title('Morpho space')
1088 | plt.show()
1089 |
1090 | annotations = df_grid.columns[~df_grid.columns.isin(['x', 'y'])].copy()
1091 |
1092 | for k in annotations:
1093 | print('Migrating - ' + k + ' to segmentations.')
1094 |
1095 | # Interpolation
1096 | df_spots[k] = griddata(points=a.T, values=df_grid[k], xi=b.T, method=how)
1097 |
1098 | # Create KDTree
1099 | tree = cKDTree(a.T)
1100 |
1101 | # Query tree for nearest distance
1102 | distances, _ = tree.query(b.T, distance_upper_bound=max_distance)
1103 | # Mask df_spots where the distance is too high
1104 |         df_spots.loc[distances == np.inf, k] = None
1105 | # df_spots[k] = scipy.interpolate.griddata(points=a.T, values=df_grid[k], xi=b.T, method=how)
1106 |
1107 | return df_spots
1108 |
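# Usage sketch (df as returned by read_visium(); ppm matching the annotation grid):
#   df_spots = anno_to_visium_spots(df, df_grid, ppm)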
1109 |
1110 | def plot_grid(df, annotation, spotsize=10, save=False, dpi=100, figsize=(5,5), savepath=None):
1111 | """
1112 | Plots a grid.
1113 |
1114 | Parameters
1115 | ----------
1116 | df : pandas.DataFrame
1117 | Dataframe containing data to be plotted.
1118 | annotation : str
1119 | Annotation to be used in the plot.
1120 | spotsize : int, optional
1121 | Size of the spots in the plot. Default is 10.
1122 | save : bool, optional
1123 | If true, saves the plot. Default is False.
1124 | dpi : int, optional
1125 | Dots per inch for the plot. Default is 100.
1126 | figsize : tuple, optional
1127 | Size of the figure. Default is (5,5).
1128 | savepath : str, optional
1129 | Path to save the plot. Default is None.
1130 |
1131 | Returns
1132 | -------
1133 | None
1134 | """
1135 |
1136 | plt.figure(dpi=dpi, figsize=figsize)
1137 |
1138 | ct_order = list((df[annotation].value_counts() > 0).keys())
1139 | ct_color_map = dict(zip(ct_order, np.array(sns.color_palette("colorblind", len(ct_order)))[range(len(ct_order))]))
1140 |
1141 | sns.scatterplot(x='x', y='y', hue=annotation, s=spotsize, data=df, palette=ct_color_map, hue_order=ct_order)
1142 |
1143 | plt.grid(False)
1144 | plt.title(annotation)
1145 | plt.axis('equal')
1146 |
1147 | if save:
1148 | if savepath is None:
1149 | raise ValueError('The savepath must be specified if save is True.')
1150 |
1151 | plt.savefig(savepath + '/' + annotation.replace(" ", "_") + '.pdf')
1152 |
1153 | plt.show()
1154 |
1155 |
1156 | def poly_annotator(imarray, annotation, anno_dict, plot_size=1024, use_datashader=False):
1157 | """
1158 | Interactive annotation tool with line annotations using Panel tabs for toggling between morphology and annotation.
1159 |     The principle is to draw closed or semi-closed shapes that are later filled according to the proper annotation.
1160 |
1161 | Parameters
1162 | ----------
1163 | imarray : numpy.ndarray
1164 | Image in numpy array format.
1165 | annotation : numpy.ndarray
1166 | Label image in numpy array format.
1167 | anno_dict : dict
1168 | Dictionary of structures to annotate and colors for the structures.
1169 | plot_size: int, default=1024
1170 | Figure size for plotting.
1171 | use_datashader : Boolean, optional
1172 | If we should use datashader for rendering the image. Recommended for high resolution image. Default is False.
1173 |
1174 | Returns
1175 | -------
1176 | Panel Tabs object
1177 | A Tabs object containing the annotation and image panels.
1178 | dict
1179 | Dictionary containing the Bokeh renderers for the annotation lines.
1180 | """
1181 |
1182 | annotation_c = annotation.astype('uint8').copy()
1183 | annotation_c = np.flip(annotation_c, 0)
1184 |
1185 | imarray_c = imarray.astype('uint8').copy()
1186 | imarray_c = np.flip(imarray_c, 0)
1187 |
1188 | # Create new holoview images
1189 | anno = hv.RGB(annotation_c, bounds=(0, 0, annotation_c.shape[1], annotation_c.shape[0]))
1190 | if use_datashader:
1191 | anno = hd.regrid(anno)
1192 | ds_anno = anno.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
1193 |
1194 | img = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
1195 | if use_datashader:
1196 | img = hd.regrid(img)
1197 | ds_img = img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
1198 |
1199 | anno_tab_plot_list = [ds_anno]
1200 | img_tab_plot_list = [ds_img]
1201 |
1202 | render_dict = {}
1203 | path_dict = {}
1204 | for key in anno_dict.keys():
1205 | path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=3, line_alpha=0.6)
1206 | render_dict[key] = CustomPolyDraw(source=path_dict[key], num_objects=300, tooltip=key,
1207 | icon_colour=anno_dict[key])
1208 |
1209 | anno_tab_plot_list.append(path_dict[key])
1210 |
1211 | # Create the tabbed view
1212 | p = pn.Tabs(("Annotation", pn.panel(hd.Overlay(anno_tab_plot_list).collate())),
1213 | ("Image", pn.panel(hd.Overlay(img_tab_plot_list).collate())), dynamic=False)
1214 | return p, render_dict
1215 |
1216 |
1217 | def object_annotator(imarray, result, anno_dict, render_dict, alpha):
1218 | """
1219 | Extracts annotations and labels them according to brush strokes while generating out_img and the label image corrected_labels, and the anno_dict object.
1220 |
1221 | Parameters
1222 | ----------
1223 | imarray : numpy.ndarray
1224 | Image in numpy array format.
1225 | result : numpy.ndarray
1226 | Label image in numpy array format.
1227 | anno_dict : dict
1228 | Dictionary of structures to annotate and colors for the structures.
1229 | render_dict : dict
1230 | Bokeh data container.
1231 | alpha : float
1232 | Blending factor.
1233 |
1234 | Returns
1235 | -------
1236 | numpy.ndarray
1237 | Corrected label image.
1238 | dict
1239 | Dictionary containing the object colors.
1240 | """
1241 |
1242 | colorpool = ['green', 'cyan', 'brown', 'magenta', 'blue', 'red', 'orange']
1243 |
1244 | result[:] = 1
1245 | corrected_labels = result.copy()
1246 | object_dict = {'unassigned': 'yellow'}
1247 |
1248 | for idx,a in enumerate(render_dict.keys()):
1249 | if render_dict[a].data['xs']:
1250 | print(a)
1251 | for o in range(len(render_dict[a].data['xs'])):
1252 | x = np.array(render_dict[a].data['xs'][o]).astype(int)
1253 | y = np.array(render_dict[a].data['ys'][o]).astype(int)
1254 | rr, cc = polygon(y, x)
1255 |                 inshape = (result.shape[0] > rr) & (0 < rr) & (result.shape[1] > cc) & (0 < cc)
[... file lines 1256-1770 are missing from this dump ...]
1771 | def calculate_axis_3p(df_ibex, anno, structure, output_col, w=[0.2, 0.8], prefix='L2_dist_'):
1772 |     """
1773 |     Function to calculate a unimodal normalized axis based on the ordered structure S1 -> S2 -> S3.
1774 |
1775 | Parameters:
1776 | -----------
1777 | df_ibex : DataFrame
1778 | Input DataFrame that contains the data.
1779 | anno : str, optional
1780 | Annotation column.
1781 | structure : list of str, optional
1782 |         List of structures to be measured: [S1, S2, S3].
1783 | w : list of float, optional
1784 | List of weights between the 2 components of the axis w[0] * S1->S2 and w[1] * S2->S3. Default is [0.2,0.8].
1785 | prefix : str, optional
1786 | Prefix for the column names in DataFrame. Default is 'L2_dist_'.
1787 | output_col : str, optional
1788 | Name of the output column.
1789 |
1790 | Returns:
1791 | --------
1792 | df : DataFrame
1793 | DataFrame with calculated new column.
1794 | """
1795 | df = df_ibex.copy()
1796 | a1 = (df[prefix + anno +'_'+ structure[0]] - df[prefix + anno +'_'+ structure[1]]) \
1797 | /(df[prefix + anno +'_'+ structure[0]] + df[prefix + anno +'_'+ structure[1]])
1798 |
1799 | a2 = (df[prefix + anno +'_'+ structure[1]] - df[prefix + anno +'_'+ structure[2]]) \
1800 | /(df[prefix + anno +'_'+ structure[1]] + df[prefix + anno +'_'+ structure[2]])
1801 | df[output_col] = w[0]*a1 + w[1]*a2
1802 |
1803 | return df
1804 |
1805 |
1806 | def calculate_axis_2p(df_ibex, anno, structure, output_col, prefix='L2_dist_'):
1807 | """
1808 |     Function to calculate a unimodal normalized axis based on the ordered structure S1 -> S2.
1809 |
1810 | Parameters:
1811 | -----------
1812 | df_ibex : DataFrame
1813 | Input DataFrame that contains the data.
1814 | anno : str, optional
1815 | Annotation column.
1816 | structure : list of str, optional
1817 |         List of structures to be measured: [S1, S2].
1818 | prefix : str, optional
1819 | Prefix for the column names in DataFrame. Default is 'L2_dist_'.
1820 | output_col : str, optional
1821 | Name of the output column.
1822 |
1823 | Returns:
1824 | --------
1825 | df : DataFrame
1826 | DataFrame with calculated new column.
1827 | """
1828 | df = df_ibex.copy()
1829 | a1 = (df[prefix + anno +'_'+ structure[0]] - df[prefix + anno +'_'+ structure[1]]) \
1830 | /(df[prefix + anno +'_'+ structure[0]] + df[prefix + anno +'_'+ structure[1]])
1831 |
1832 | df[output_col] = a1
1833 |
1834 | return df
1835 |
1836 | def bin_axis(ct_order, cutoff_values, df, axis_anno_name):
1837 | """
1838 | Bins a column of a DataFrame based on cutoff values and assigns manual bin labels.
1839 |
1840 | Parameters:
1841 | ct_order (list): The order of manual bin labels.
1842 | cutoff_values (list): The cutoff values used for binning.
1843 | df (pandas.DataFrame): The DataFrame containing the column to be binned.
1844 | axis_anno_name (str): The name of the column to be binned.
1845 |
1846 | Returns:
1847 | pandas.DataFrame: The modified DataFrame with manual bin labels assigned.
1848 | """
1849 | # Manual annotations
1850 | df['manual_bin_' + axis_anno_name] = 'unassigned'
1851 | df['manual_bin_' + axis_anno_name] = df['manual_bin_' + axis_anno_name].astype('object')
1852 | df.loc[np.array(df[axis_anno_name] < cutoff_values[0]), 'manual_bin_' + axis_anno_name] = ct_order[0]
1853 | print(ct_order[0] + '= (' + str(cutoff_values[0]) + '>' + axis_anno_name + ')')
1854 |
1855 | for idx, r in enumerate(cutoff_values[:-1]):
1856 | df.loc[np.array(df[axis_anno_name] >= cutoff_values[idx]) & np.array(df[axis_anno_name] < cutoff_values[idx+1]),
1857 | 'manual_bin_' + axis_anno_name] = ct_order[idx+1]
1858 | print(ct_order[idx+1] + '= (' + str(cutoff_values[idx]) + '<=' + axis_anno_name + ') & (' + str(cutoff_values[idx+1]) + '>' + axis_anno_name + ')' )
1859 |
1860 | df.loc[np.array(df[axis_anno_name] >= cutoff_values[-1]), 'manual_bin_' + axis_anno_name] = ct_order[-1]
1861 |     print(ct_order[-1] + '= (' + str(cutoff_values[-1]) + '<=' + axis_anno_name + ')')
1862 |
1863 | df['manual_bin_' + axis_anno_name] = df['manual_bin_' + axis_anno_name].astype('category')
1864 | df['manual_bin_' + axis_anno_name + '_int'] = df['manual_bin_' + axis_anno_name].cat.codes
1865 |
1866 | return df
1867 |
1868 |
1869 | def plot_cont(data, x_col='centroid-1', y_col='centroid-0', color_col='L2_dist_annotation_tissue_Edge',
1870 | cmap='jet', title='L2_dist_annotation_tissue_Edge', s=1, dpi=100, figsize=[10,10]):
1871 | plt.figure(dpi=dpi, figsize=figsize)
1872 |
1873 | # Create an axes instance for the scatter plot
1874 | ax = plt.subplot(111)
1875 |
1876 | # Create the scatterplot
1877 | scatter = sns.scatterplot(x=x_col, y=y_col, data=data,
1878 | c=data[color_col], cmap=cmap, s=s,
1879 | legend=False, ax=ax) # Use the created axes
1880 |
1881 | plt.grid(False)
1882 | plt.axis('equal')
1883 | plt.title(title)
1884 | for pos in ['right', 'top', 'bottom', 'left']:
1885 | ax.spines[pos].set_visible(False)
1886 |
1887 | # Add colorbar
1888 | norm = plt.Normalize(data[color_col].min(), data[color_col].max())
1889 | sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
1890 | sm.set_array([])
1891 | cbar = plt.colorbar(sm, ax=ax, label=title, aspect=30) # Use the created axes for the colorbar
1892 | cbar.ax.set_position([0.85, 0.25, 0.05, 0.5]) # adjust the position as needed
1893 |
1894 |
1895 | plt.show()
--------------------------------------------------------------------------------
/tissue_tag/__init__.py:
--------------------------------------------------------------------------------
1 | from .tissue_tag import *
2 |
--------------------------------------------------------------------------------
/tissue_tag/__pycache__/__init__.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/tissue_tag/__pycache__/__init__.cpython-39.pyc
--------------------------------------------------------------------------------
/tissue_tag/__pycache__/tissue_tag.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Teichlab/TissueTag/59ab4ccb771768c379397cd572caf094e898ef02/tissue_tag/__pycache__/tissue_tag.cpython-39.pyc
--------------------------------------------------------------------------------
/tissue_tag/tissue_tag.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import bokeh
3 | import json
4 | import matplotlib
5 | import matplotlib.font_manager as fm
6 | import matplotlib.pyplot as plt
7 | import numpy as np
8 | import pandas as pd
9 | import pickle
10 | import random
11 | import scipy
12 | import seaborn as sns
13 | import skimage
14 | import warnings
15 | import os
16 | import holoviews as hv
17 | import holoviews.operation.datashader as hd
18 | import panel as pn
19 | from bokeh.models import FreehandDrawTool, PolyDrawTool, PolyEditTool,TabPanel, Tabs, UndoTool
20 | from bokeh.plotting import figure, show
21 | from functools import partial
22 | from io import BytesIO
23 | from packaging import version
24 | from PIL import Image, ImageColor, ImageDraw, ImageEnhance, ImageFilter, ImageFont
25 | from scipy import interpolate
26 | from scipy.spatial import distance
27 | from skimage import data, feature, future, segmentation
28 | from skimage.draw import polygon, disk
29 | from sklearn.ensemble import RandomForestClassifier
30 | from tqdm import tqdm
31 | import numpy as np
32 | import pandas as pd
33 | import skimage.transform
34 | import skimage.draw
35 | import scipy.ndimage
36 | hv.extension('bokeh')
37 |
38 | try:
39 | import scanpy as scread_visium
40 | except ImportError:
41 | print('scanpy is not available')
42 |
43 | Image.MAX_IMAGE_PIXELS = None
44 |
45 | font_path = fm.findfont('DejaVu Sans')
46 |
47 | class CustomFreehandDraw(hv.streams.FreehandDraw):
48 | """
49 | This custom class adds the ability to customise the icon for the FreeHandDraw tool.
50 | """
51 |
52 | def __init__(self, empty_value=None, num_objects=0, styles=None, tooltip=None, icon_colour="black", **params):
53 | self.icon_colour = icon_colour
54 | super().__init__(empty_value, num_objects, styles, tooltip, **params)
55 |
56 | # Logic to update plot without last annotation
57 | class CustomFreehandDrawCallback(hv.plotting.bokeh.callbacks.PolyDrawCallback):
58 | """
59 | This custom class is the corresponding callback for the CustomFreeHandDraw which will render a custom icon for
60 | the FreeHandDraw tool.
61 | """
62 |
63 | def initialize(self, plot_id=None):
64 | plot = self.plot
65 | cds = plot.handles['cds']
66 | glyph = plot.handles['glyph']
67 | stream = self.streams[0]
68 | if stream.styles:
69 | self._create_style_callback(cds, glyph)
70 | kwargs = {}
71 | if stream.tooltip:
72 | kwargs['description'] = stream.tooltip
73 | if stream.empty_value is not None:
74 | kwargs['empty_value'] = stream.empty_value
75 | kwargs['icon'] = create_icon(stream.tooltip[0], stream.icon_colour)
76 | poly_tool = FreehandDrawTool(
77 | num_objects=stream.num_objects,
78 | renderers=[plot.handles['glyph_renderer']],
79 | **kwargs
80 | )
81 | plot.state.tools.append(poly_tool)
82 | self._update_cds_vdims(cds.data)
83 | hv.plotting.bokeh.callbacks.CDSCallback.initialize(self, plot_id)
84 |
85 |
86 | class CustomPolyDraw(hv.streams.PolyDraw):
87 | """
88 | Attaches a FreehandDrawTool and syncs the datasource.
89 | """
90 |
91 | def __init__(self, empty_value=None, drag=True, num_objects=0, show_vertices=False, vertex_style={}, styles={},
92 | tooltip=None, icon_colour="black", **params):
93 | self.icon_colour = icon_colour
94 | super().__init__(empty_value, drag, num_objects, show_vertices, vertex_style, styles, tooltip, **params)
95 |
96 | class CustomPolyDrawCallback(hv.plotting.bokeh.callbacks.GlyphDrawCallback):
97 |
98 | def initialize(self, plot_id=None):
99 | plot = self.plot
100 | stream = self.streams[0]
101 | cds = self.plot.handles['cds']
102 | glyph = self.plot.handles['glyph']
103 | renderers = [plot.handles['glyph_renderer']]
104 | kwargs = {}
105 | if stream.num_objects:
106 | kwargs['num_objects'] = stream.num_objects
107 | if stream.show_vertices:
108 | vertex_style = dict({'size': 10}, **stream.vertex_style)
109 | r1 = plot.state.scatter([], [], **vertex_style)
110 | kwargs['vertex_renderer'] = r1
111 | if stream.styles:
112 | self._create_style_callback(cds, glyph)
113 | if stream.tooltip:
114 | kwargs['description'] = stream.tooltip
115 | if stream.empty_value is not None:
116 | kwargs['empty_value'] = stream.empty_value
117 | kwargs['icon'] = create_icon(stream.tooltip[0], stream.icon_colour)
118 | poly_tool = PolyDrawTool(
119 | drag=all(s.drag for s in self.streams), renderers=renderers,
120 | **kwargs
121 | )
122 | plot.state.tools.append(poly_tool)
123 | self._update_cds_vdims(cds.data)
124 | super().initialize(plot_id)
125 |
126 | # Overload the callback from holoviews to use the custom FreeHandDrawCallback class. Probably not safe.
127 | hv.plotting.bokeh.callbacks.Stream._callbacks['bokeh'].update({
128 | CustomFreehandDraw: CustomFreehandDrawCallback,
129 | CustomPolyDraw: CustomPolyDrawCallback
130 | })
131 |
132 | class SynchronisedFreehandDrawLink(hv.plotting.links.Link):
133 | """
134 | This custom class is a helper designed for creating synchronised FreehandDraw tools.
135 | """
136 |
137 | _requires_target = True
138 |
139 | class SynchronisedFreehandDrawCallback(hv.plotting.bokeh.LinkCallback):
140 | """
141 | This custom class implements the method to synchronise data between two FreehandDraw tools by manually updating
142 | the data_source of the linked tools.
143 | """
144 |
145 | source_model = "cds"
146 | source_handles = ["plot"]
147 | target_model = "cds"
148 | target_handles = ["plot"]
149 | on_source_changes = ["data"]
150 | on_target_changes = ["data"]
151 |
152 | source_code = """
153 | target_cds.data = source_cds.data
154 | target_cds.change.emit()
155 | """
156 |
157 | target_code = """
158 | source_cds.data = target_cds.data
159 | source_cds.change.emit()
160 | """
161 |
162 | # Register the callback class to the link class
163 | SynchronisedFreehandDrawLink.register_callback('bokeh', SynchronisedFreehandDrawCallback)
164 |
165 | def to_base64(img):
166 | buffered = BytesIO()
167 | img.save(buffered, format="png")
168 | data = base64.b64encode(buffered.getvalue()).decode('utf-8')
169 | return f'data:image/png;base64,{data}'
170 |
171 | def create_icon(name, color):
172 | font_size = 25
173 | img = Image.new('RGBA', (30, 30), (255, 0, 0, 0))
174 | ImageDraw.Draw(img).text((5, 2), name, fill=tuple((np.array(matplotlib.colors.to_rgb(color)) * 255).astype(int)),
175 | font=ImageFont.truetype(font_path, font_size))
176 | if version.parse(bokeh.__version__) < version.parse("3.1.0"):
177 | img = to_base64(img)
178 | return img
179 |
180 | def read_image(
181 | path,
182 | ppm_image=None,
183 | ppm_out=1,
184 | contrast_factor=1,
185 | background_image_path=None,
186 | plot=True,
187 | ):
188 | """
189 | Reads an H&E or fluorescent image and returns the image with optional enhancements.
190 |
191 | Parameters
192 | ----------
193 | path : str
194 | Path to the image. The image must be in a format supported by Pillow. Refer to
195 | https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html for the list
196 | of supported formats.
197 | ppm_image : float, optional
198 | Pixels per microns of the input image. If not provided, this will be extracted from the image
199 | metadata with info['resolution']. If the metadata is not present, an error will be thrown.
200 | ppm_out : float, optional
201 | Pixels per microns of the output image. Defaults to 1.
202 | contrast_factor : int, optional
203 | Factor to adjust contrast for output image, typically between 2-5. Defaults to 1.
204 | background_image_path : str, optional
205 | Path to a background image. If provided, this image and the input image are combined
206 | to create a virtual H&E (vH&E). If not provided, vH&E will not be performed.
207 | plot : boolean, optional
208 |         Whether to plot the loaded image. Default is True.
209 |
210 | Returns
211 | -------
212 | numpy.ndarray
213 | The processed image.
214 | float
215 | The pixels per microns of the input image.
216 | float
217 | The pixels per microns of the output image.
218 |
219 | Raises
220 | ------
221 | ValueError
222 | If 'ppm_image' is not provided and cannot be extracted from the image metadata.
223 | """
224 |
225 | im = Image.open(path)
226 |     if not ppm_image:
227 |         try:
228 |             ppm_image = im.info['resolution'][0]
229 |             print('found ppm in image metadata: ' + str(ppm_image))
230 |         except KeyError:
231 |             raise ValueError('could not find ppm in image metadata, please provide a ppm value')
232 | width, height = im.size
233 | newsize = (int(width/ppm_image*ppm_out), int(height/ppm_image*ppm_out))
234 | # resize
235 | im = im.resize(newsize,Image.Resampling.LANCZOS)
236 | im = im.convert("RGBA")
237 | #increase contrast
238 | enhancer = ImageEnhance.Contrast(im)
239 | factor = contrast_factor
240 | im = enhancer.enhance(factor*factor)
241 |
242 | if background_image_path:
243 | im2 = Image.open(background_image_path)
244 | # resize
245 | im2 = im2.resize(newsize,Image.Resampling.LANCZOS)
246 | im2 = im2.convert("RGBA")
247 | #increase contrast
248 | enhancer = ImageEnhance.Contrast(im2)
249 | factor = contrast_factor
250 | im2 = enhancer.enhance(factor*factor)
251 | # virtual H&E
252 | # im2 = im2.convert("RGBA")
253 | im = simonson_vHE(np.array(im).astype('uint8'),np.array(im2).astype('uint8'))
254 | if plot:
255 | plt.figure(dpi=100)
256 | plt.imshow(im,origin='lower')
257 | plt.show()
258 |
259 | return np.array(im),ppm_image,ppm_out
260 |
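# --- Usage sketch (illustrative; not part of the original file) ---
# Loading one channel of the bundled IBEX example at an assumed input resolution.
# The ppm value here is a placeholder; pass the real pixels-per-micron of your scan.
def _example_read_image():
    im, ppm_in, ppm_out = read_image(
        path='data/tissue_tag_minimal_example_ibex/Sample_05_THY45_Z5_ch0058.jpg',
        ppm_image=1.0,      # assumed; replace with the true image resolution
        ppm_out=1.0,
        contrast_factor=2,
        plot=False,
    )
    return im, ppm_in, ppm_out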
261 | def read_visium(
262 | spaceranger_dir_path,
263 | use_resolution='hires',
264 | res_in_ppm = None,
265 | fullres_path = None,
266 | header = None,
267 | plot = True,
268 | in_tissue = True,
269 | ):
270 | """
271 | Reads 10X Visium image data from SpaceRanger (v1.3.0).
272 |
273 | Parameters
274 | ----------
275 | spaceranger_dir_path : str
276 | Path to the 10X SpaceRanger output folder.
277 | use_resolution : {'hires', 'lowres', 'fullres'}, optional
278 | Desired image resolution. 'fullres' refers to the original image that was sent to SpaceRanger
279 | along with sequencing data. If 'fullres' is specified, `fullres_path` must also be provided.
280 | Defaults to 'hires'.
281 | res_in_ppm : float, optional
282 | Used when working with full resolution images to resize the full image to a specified pixels per
283 | microns.
284 | fullres_path : str, optional
285 | Path to the full resolution image used for mapping. This must be specified if `use_resolution` is
286 | set to 'fullres'.
287 |     header : int, optional
288 |         Newer SpaceRanger versions may need this set to 0. Default is None.
289 |     plot : Boolean
290 |         Whether to plot the Visium object to scale.
291 |     in_tissue : Boolean
292 |         Whether to keep only spots under the tissue or all Visium spots.
293 |
294 | Returns
295 | -------
296 | numpy.ndarray
297 | The processed image.
298 | float
299 | The pixels per microns of the image.
300 | pandas.DataFrame
301 | A DataFrame containing information on the tissue positions.
302 |
303 | Raises
304 | ------
305 | AssertionError
306 | If 'use_resolution' is set to 'fullres' but 'fullres_path' is not specified.
307 | """
308 | plt.figure(figsize=[12,12])
309 |
310 | spotsize = 55 #um spot size of a visium spot
311 |
312 | scalef = json.load(open(spaceranger_dir_path+'spatial/scalefactors_json.json','r'))
313 | if use_resolution=='fullres':
314 | assert fullres_path is not None, 'if use_resolution=="fullres" fullres_path has to be specified'
315 |
316 | df = pd.read_csv(spaceranger_dir_path+'spatial/tissue_positions_list.csv',header=header)
317 | if header==0:
318 | df = df.set_index(keys='barcode')
319 | if in_tissue:
320 | df = df[df['in_tissue']>0] # in tissue
321 | # turn df to mu
322 | fullres_ppm = scalef['spot_diameter_fullres']/spotsize
323 | df['pxl_row_in_fullres'] = df['pxl_row_in_fullres']/fullres_ppm
324 | df['pxl_col_in_fullres'] = df['pxl_col_in_fullres']/fullres_ppm
325 | else:
326 | df = df.set_index(keys=0)
327 | if in_tissue:
328 | df = df[df[1]>0] # in tissue
329 | # turn df to mu
330 | fullres_ppm = scalef['spot_diameter_fullres']/spotsize
331 | df['pxl_row_in_fullres'] = df[4]/fullres_ppm
332 | df['pxl_col_in_fullres'] = df[5]/fullres_ppm
333 |
334 |
335 | if use_resolution=='fullres':
336 | im = Image.open(fullres_path)
337 | ppm = fullres_ppm
338 | else:
339 | im = Image.open(spaceranger_dir_path+'spatial/tissue_'+use_resolution+'_image.png')
340 | ppm = scalef['spot_diameter_fullres']*scalef['tissue_'+use_resolution+'_scalef']/spotsize
341 |
342 |
343 | if res_in_ppm:
344 | width, height = im.size
345 | newsize = (int(width*res_in_ppm/ppm), int(height*res_in_ppm/ppm))
346 | im = im.resize(newsize,Image.Resampling.LANCZOS)
347 | ppm = res_in_ppm
348 |
349 | # translate from mu to pixel
350 | df['pxl_col'] = df['pxl_col_in_fullres']*ppm
351 | df['pxl_row'] = df['pxl_row_in_fullres']*ppm
352 |
353 |
354 | im = im.convert("RGBA")
355 |
356 | if plot:
357 | coordinates = np.vstack((df['pxl_col'],df['pxl_row']))
358 | plt.imshow(im,origin='lower')
359 | plt.plot(coordinates[0,:],coordinates[1,:],'.')
360 | plt.title( 'ppm - '+str(ppm))
361 |
362 | return np.array(im), ppm, df
363 |
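# --- Usage sketch (illustrative; not part of the original file) ---
# Reading the bundled Visium example folder. `header=None` matches the older
# SpaceRanger tissue_positions_list.csv shipped with this repository.
def _example_read_visium():
    im, ppm, df = read_visium(
        spaceranger_dir_path='data/tissue_tag_minimal_example_visium/',
        use_resolution='hires',
        header=None,
        plot=False,
    )
    return im, ppm, df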
364 |
365 | def scribbler(imarray, anno_dict, plot_size=1024, use_datashader=False):
366 | """
367 | Creates interactive scribble line annotations with Holoviews.
368 |
369 | Parameters
370 | ----------
371 | imarray : np.array
372 | Image in numpy array format.
373 | anno_dict : dict
374 | Dictionary of structures to annotate and colors for the structures.
375 | plot_size : int, optional
376 | Used to adjust the plotting area size. Default is 1024.
377 | use_datashader : Boolean, optional
378 | If we should use datashader for rendering the image. Recommended for high resolution image. Default is False.
379 |
380 | Returns
381 | -------
382 | holoviews.core.overlay.Overlay
383 | A Holoviews overlay composed of the image and annotation layers.
384 | dict
385 | Dictionary of renderers for each annotation.
386 | """
387 |
388 | imarray_c = imarray.astype('uint8').copy()
389 | imarray_c = np.flip(imarray_c, 0)
390 |
391 | # Create a new holoview image
392 | img = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
393 | if use_datashader:
394 | img = hd.regrid(img)
395 | ds_img = img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
396 |
397 | # Create the plot list object
398 | plot_list = [ds_img]
399 |
400 |     # The returned overlay is assembled below from the plot list and is
401 |     # rendered by HoloViews/Panel when displayed; no eager bokeh render of
402 |     # the bare image is needed here.
403 |
404 | render_dict = {}
405 | path_dict = {}
406 | for key in anno_dict.keys():
407 | path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
408 | render_dict[key] = CustomFreehandDraw(source=path_dict[key], num_objects=200, tooltip=key,
409 | icon_colour=anno_dict[key])
410 | plot_list.append(path_dict[key])
411 |
412 | # Create the plot from the plot list
413 | p = hd.Overlay(plot_list).collate()
414 |
415 | return p, render_dict
416 |
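# --- Usage sketch (illustrative; not part of the original file) ---
# The structure names and colours below are examples. In a notebook, display the
# returned overlay, draw scribbles, then pass render_dict to scribble_to_labels.
def _example_scribbler(imarray):
    anno_dict = {'Cortex': 'red', 'Medulla': 'blue'}
    overlay, render_dict = scribbler(imarray, anno_dict, plot_size=1024)
    # display(overlay)  # draw interactively, then rasterise the strokes:
    # training_labels = scribble_to_labels(imarray, render_dict, line_width=10)
    return overlay, render_dict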
417 |
418 | def annotator(imarray, labels, anno_dict, plot_size=1024,invert_y=False,use_datashader=False,alpha=0.7):
419 | """
420 | Interactive annotation tool with line annotations using Panel tabs for toggling between morphology and annotation.
421 |
422 | Parameters
423 | ----------
424 | imarray: np.array
425 | Image in numpy array format.
426 | labels: np.array
427 | Label image in numpy array format.
428 | anno_dict: dict
429 | Dictionary of structures to annotate and colors for the structures.
430 | plot_size: int, default=1024
431 | Figure size for plotting.
432 |     invert_y : boolean
433 |         Invert the plot along the y axis.
434 |     use_datashader : Boolean, optional
435 |         If we should use datashader for rendering the image. Recommended for high resolution image. Default is False.
436 |     alpha : float
437 |         Blending extent of the "Annotation" tab.
438 |
439 | Returns
440 | -------
441 | Panel Tabs object
442 | A Tabs object containing the annotation and image panels.
443 | dict
444 | Dictionary of Bokeh renderers for each annotation.
445 | """
446 | import logging
447 | logging.getLogger('bokeh.core.validation.check').setLevel(logging.ERROR)
448 |
449 | # convert label image to rgb for annotation
450 | labels_rgb = rgb_from_labels(labels, colors=list(anno_dict.values()))
451 | annotation = overlay_labels(imarray,labels_rgb,alpha=alpha,show=False)
452 |
453 | annotation_c = annotation.astype('uint8').copy()
454 | if not invert_y:
455 | annotation_c = np.flip(annotation_c, 0)
456 |
457 | imarray_c = imarray.astype('uint8').copy()
458 | if not invert_y:
459 | imarray_c = np.flip(imarray_c, 0)
460 |
461 | # Create new holoview images
462 | anno = hv.RGB(annotation_c, bounds=(0, 0, annotation_c.shape[1], annotation_c.shape[0]))
463 | if use_datashader:
464 | anno = hd.regrid(anno)
465 | ds_anno = anno.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
466 |
467 | img = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
468 | if use_datashader:
469 | img = hd.regrid(img)
470 | ds_img = img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
471 |
472 | anno_tab_plot_list = [ds_anno]
473 | img_tab_plot_list = [ds_img]
474 |
475 | render_dict = {}
476 | img_render_dict = {}
477 | path_dict = {}
478 | img_path_dict = {}
479 | for key in anno_dict.keys():
480 | path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
481 | render_dict[key] = CustomFreehandDraw(source=path_dict[key], num_objects=200, tooltip=key,
482 | icon_colour=anno_dict[key])
483 |
484 | img_path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
485 | img_render_dict[key] = CustomFreehandDraw(source=img_path_dict[key], num_objects=200, tooltip=key,
486 | icon_colour=anno_dict[key])
487 |
488 | SynchronisedFreehandDrawLink(path_dict[key], img_path_dict[key])
489 | anno_tab_plot_list.append(path_dict[key])
490 | img_tab_plot_list.append(img_path_dict[key])
491 |
492 | # Create the tabbed view
493 | p = pn.Tabs(("Annotation", pn.panel(hd.Overlay(anno_tab_plot_list).collate())),
494 | ("Image", pn.panel(hd.Overlay(img_tab_plot_list).collate())), dynamic=False)
495 | return p, render_dict
496 |
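# --- Usage sketch (illustrative; not part of the original file) ---
# Refining an existing label image: strokes drawn in either tab are synchronised,
# so annotations can be placed over the blended labels or the raw morphology.
def _example_annotator(imarray, labels):
    anno_dict = {'Cortex': 'red', 'Medulla': 'blue'}  # example structures
    tabs, render_dict = annotator(imarray, labels, anno_dict, plot_size=1024)
    # display(tabs)  # annotate, then fold the strokes back with update_annotator
    return tabs, render_dict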
497 |
498 |
499 | def complete_pixel_gaps(x,y):
500 | """
501 | Function to complete pixel gaps in a given x, y coordinates
502 |
503 | Parameters:
504 | x : list
505 | list of x coordinates
506 | y : list
507 | list of y coordinates
508 |
509 | Returns:
510 | new_x, new_y : tuple
511 | tuple of completed x and y coordinates
512 | """
513 |
514 | new_x_1 = []
515 | new_x_2 = []
516 | # iterate over x coordinate values
517 | for idx, px in enumerate(x[:-1]):
518 | # interpolate between each pair of x points
519 | interpolation = interpolate.interp1d(x[idx:idx+2], y[idx:idx+2])
520 |         interpolated_x_1 = np.linspace(x[idx], x[idx+1], num=int(np.abs(x[idx+1] - x[idx])) + 1)
521 | interpolated_x_2 = interpolation(interpolated_x_1).astype(int)
522 | # add interpolated values to new x lists
523 | new_x_1 += list(interpolated_x_1)
524 | new_x_2 += list(interpolated_x_2)
525 |
526 | new_y_1 = []
527 | new_y_2 = []
528 | # iterate over y coordinate values
529 | for idx, py in enumerate(y[:-1]):
530 | # interpolate between each pair of y points
531 | interpolation = interpolate.interp1d(y[idx:idx+2], x[idx:idx+2])
532 |         interpolated_y_1 = np.linspace(y[idx], y[idx+1], num=int(np.abs(y[idx+1] - y[idx])) + 1)
533 | interpolated_y_2 = interpolation(interpolated_y_1).astype(int)
534 | # add interpolated values to new y lists
535 | new_y_1 += list(interpolated_y_1)
536 | new_y_2 += list(interpolated_y_2)
537 |
538 | # combine x and y lists
539 | new_x = new_x_1 + new_y_2
540 | new_y = new_x_2 + new_y_1
541 |
542 | return new_x, new_y
543 |
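# --- Worked example (illustrative; not part of the original file) ---
# complete_pixel_gaps densifies a sparse stroke so consecutive samples are at
# most one pixel apart along each axis:
def _example_complete_pixel_gaps():
    xs, ys = complete_pixel_gaps(np.array([0, 4]), np.array([0, 2]))
    return xs, ys  # pixel-dense coordinates connecting (0, 0) and (4, 2)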
544 |
545 |
546 | def scribble_to_labels(imarray, render_dict, line_width=10):
547 | """
548 | Extract scribbles to a label image.
549 |
550 | Parameters
551 | ----------
552 | imarray: np.array
553 | Image in numpy array format used to calculate the label image size.
554 | render_dict: dict
555 | Bokeh object carrying annotations.
556 | line_width: int
557 | Width of the line labels.
558 |
559 | Returns
560 | -------
561 | np.array
562 | Annotation image.
563 | """
564 | annotations = {}
565 | training_labels = np.zeros((imarray.shape[1], imarray.shape[0]), dtype=np.uint8)
566 |
567 | for idx, a in enumerate(render_dict.keys()):
568 | xs = []
569 | ys = []
570 | annotations[a] = []
571 |
572 | for o in range(len(render_dict[a].data['xs'])):
573 | xt, yt = complete_pixel_gaps(
574 | np.array(render_dict[a].data['xs'][o]).astype(int),
575 | np.array(render_dict[a].data['ys'][o]).astype(int)
576 | )
577 | xs.extend(xt)
578 | ys.extend(yt)
579 | annotations[a].append(np.vstack([
580 | np.array(render_dict[a].data['xs'][o]).astype(int),
581 | np.array(render_dict[a].data['ys'][o]).astype(int)
582 | ]))
583 |
584 | xs = np.array(xs)
585 | ys = np.array(ys)
586 | inshape = (xs > 0) & (xs < imarray.shape[1]) & (ys > 0) & (ys < imarray.shape[0])
587 | xs = xs[inshape]
588 | ys = ys[inshape]
589 |
590 | training_labels[np.floor(xs).astype(int), np.floor(ys).astype(int)] = idx + 1
591 |
592 | training_labels = training_labels.transpose()
593 | return skimage.segmentation.expand_labels(training_labels, distance=line_width / 2)
594 |
595 |
596 | def rgb_from_labels(labelimage, colors):
597 | """
598 | Helper function to plot from label images.
599 |
600 | Parameters
601 | ----------
602 | labelimage: np.array
603 | Label image with pixel values corresponding to labels.
604 | colors: list
605 | Colors corresponding to pixel values for plotting.
606 |
607 | Returns
608 | -------
609 | np.array
610 | Annotation image.
611 | """
612 | labelimage_rgb = np.zeros((labelimage.shape[0], labelimage.shape[1], 4))
613 |
614 | for c in range(len(colors)):
615 | color = ImageColor.getcolor(colors[c], "RGB")
616 | labelimage_rgb[labelimage == c + 1, 0:3] = np.array(color)
617 |
618 | labelimage_rgb[:, :, 3] = 255
619 | return labelimage_rgb.astype('uint8')
620 |
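# --- Worked example (illustrative; not part of the original file) ---
# Label value 1 is painted with the first colour, value 2 with the second:
def _example_rgb_from_labels():
    labels = np.array([[0, 1], [2, 1]])
    rgb = rgb_from_labels(labels, colors=['red', 'blue'])
    return rgb  # (2, 2, 4) uint8 RGBA image; label 0 stays black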
621 |
622 |
623 | def sk_rf_classifier(im, training_labels,anno_dict,plot=True):
624 | """
625 | A simple random forest pixel classifier from sklearn.
626 |
627 | Parameters
628 | ----------
629 | im : array
630 | The actual image to predict the labels from, should be the same size as training_labels.
631 | training_labels : array
632 | Label image with pixel values corresponding to labels.
633 | anno_dict: dict
634 | Dictionary of structures to annotate and colors for the structures.
635 | plot : boolean, optional
636 | if to plot the loaded image. default is True.
637 |
638 | Returns
639 | -------
640 | array
641 | Predicted label map.
642 | """
643 |
644 | sigma_min = 1
645 | sigma_max = 16
646 | features_func = partial(feature.multiscale_basic_features,
647 |                             intensity=True, edges=False, texture=True,  # note: the previous '~True' evaluated to -2, which is truthy, i.e. texture on
648 | sigma_min=sigma_min, sigma_max=sigma_max, channel_axis=-1)
649 |
650 | features = features_func(im)
651 | clf = RandomForestClassifier(n_estimators=50, n_jobs=-1,
652 | max_depth=10, max_samples=0.05)
653 | clf = future.fit_segmenter(training_labels, features, clf)
654 |
655 | labels = future.predict_segmenter(features, clf)
656 |
657 | if plot:
658 | labels_rgb = rgb_from_labels(labels,colors=list(anno_dict.values()))
659 | overlay_labels(im,labels_rgb,alpha=0.7)
660 |
661 | return labels
662 |
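# --- Usage sketch (illustrative; not part of the original file) ---
# Typical training loop: rasterise the scribbles into sparse labels, then let the
# random-forest pixel classifier propagate them across the whole image.
def _example_pixel_classification(imarray, render_dict):
    anno_dict = {'Cortex': 'red', 'Medulla': 'blue'}  # must match the scribbles
    training_labels = scribble_to_labels(imarray, render_dict, line_width=10)
    labels = sk_rf_classifier(imarray, training_labels, anno_dict, plot=False)
    return labels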
663 | def overlay_labels(im1, im2, alpha=0.8, show=True):
664 | """
665 | Helper function to merge 2 images.
666 |
667 | Parameters
668 | ----------
669 | im1 : array
670 | 1st image.
671 | im2 : array
672 | 2nd image.
673 | alpha : float, optional
674 | Blending factor, by default 0.8.
675 | show : bool, optional
676 |         Whether to show the merged plot, by default True.
677 |
678 | Returns
679 | -------
680 | array
681 | The merged image.
682 | """
683 |
684 | #generate overlay image
685 | plt.rcParams["figure.figsize"] = [10, 10]
686 | plt.rcParams["figure.dpi"] = 100
687 | out_img = np.zeros(im1.shape,dtype=im1.dtype)
688 | out_img[:,:,:] = (alpha * im1[:,:,:]) + ((1-alpha) * im2[:,:,:])
689 | out_img[:,:,3] = 255
690 | if show:
691 | plt.imshow(out_img,origin='lower')
692 | return out_img
693 |
694 | def update_annotator(imarray, labels, anno_dict, render_dict, alpha=0.5,plot=True):
695 | """
696 | Updates annotations and generates overlay (out_img) and the label image (corrected_labels).
697 |
698 | Parameters
699 | ----------
700 | imarray : numpy.ndarray
701 | Image in numpy array format.
702 | labels : numpy.ndarray
703 | Label image in numpy array format.
704 | anno_dict : dict
705 | Dictionary of structures to annotate and colors for the structures.
706 | render_dict : dict
707 | Bokeh data container.
708 | alpha : float
709 | Blending factor.
710 |     plot : bool, optional
711 |         Whether to plot the updated annotations. Default is True.
712 |
713 | Returns
714 | -------
715 | Returns corrected labels as numpy array.
716 | """
717 |
718 | corrected_labels = labels.copy()
719 | for idx, a in enumerate(render_dict.keys()):
720 | if render_dict[a].data['xs']:
721 | print(a)
722 | for o in range(len(render_dict[a].data['xs'])):
723 | x = np.array(render_dict[a].data['xs'][o]).astype(int)
724 | y = np.array(render_dict[a].data['ys'][o]).astype(int)
725 | rr, cc = polygon(y, x)
726 | inshape = np.where(np.array(labels.shape[0] > rr) & np.array(0 < rr) & np.array(labels.shape[1] > cc) & np.array(0 < cc))[0]
727 | corrected_labels[rr[inshape], cc[inshape]] = idx + 1
728 | if plot:
729 | rgb = rgb_from_labels(corrected_labels, list(anno_dict.values()))
730 | out_img = overlay_labels(imarray, rgb, alpha=alpha)
731 |
732 | return corrected_labels
733 |
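# --- Usage sketch (illustrative; not part of the original file) ---
# Folding corrections drawn in the annotator back into the label image:
def _example_update_annotator(imarray, labels, render_dict):
    anno_dict = {'Cortex': 'red', 'Medulla': 'blue'}  # example structures
    corrected = update_annotator(imarray, labels, anno_dict, render_dict,
                                 alpha=0.5, plot=False)
    return corrected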
734 |
735 | def rescale_image(label_image, target_size):
736 | """
737 | Rescales label image to original image size.
738 |
739 | Parameters
740 | ----------
741 | label_image : numpy.ndarray
742 | Labeled image.
743 | target_size : tuple
744 | Final dimensions.
745 |
746 | Returns
747 | -------
748 | numpy.ndarray
749 | Rescaled image.
750 | """
751 | imP = Image.fromarray(label_image)
752 | newsize = (target_size[0], target_size[1])
753 | return np.array(imP.resize(newsize))
754 |
755 |
756 | def save_annotation(folder, label_image, file_name, anno_names, anno_colors, ppm):
757 | """
758 | Saves the annotated image as .tif and in addition saves the translation
759 | from annotations to labels in a pickle file.
760 |
761 | Parameters
762 | ----------
763 | folder : str
764 | Folder where to save the annotations.
765 | label_image : numpy.ndarray
766 | Labeled image.
767 | file_name : str
768 | Name for tif image and pickle.
769 | anno_names : list
770 | Names of annotated objects.
771 | anno_colors : list
772 | Colors of annotated objects.
773 | ppm : float
774 | Pixels per microns.
775 | """
776 |
777 | label_image = Image.fromarray(label_image)
778 | label_image.save(folder + file_name + '.tif')
779 | with open(folder + file_name + '.pickle', 'wb') as handle:
780 | pickle.dump(dict(zip(range(1, len(anno_names) + 1), anno_names)), handle, protocol=pickle.HIGHEST_PROTOCOL)
781 | with open(folder + file_name + '_colors.pickle', 'wb') as handle:
782 | pickle.dump(dict(zip(anno_names, anno_colors)), handle, protocol=pickle.HIGHEST_PROTOCOL)
783 | with open(folder + file_name + '_ppm.pickle', 'wb') as handle:
784 | pickle.dump({'ppm': ppm}, handle, protocol=pickle.HIGHEST_PROTOCOL)
785 |
786 |
787 | def load_annotation(folder, file_name, load_colors=False):
788 | """
789 | Loads the annotated image from a .tif file and the translation from annotations
790 | to labels from a pickle file.
791 |
792 | Parameters
793 | ----------
794 | folder : str
795 | Folder path for annotations.
796 | file_name : str
797 | Name for tif image and pickle without extensions.
798 | load_colors : bool, optional
799 | If True, get original colors used for annotations. Default is False.
800 |
801 | Returns
802 | -------
803 | tuple
804 | Returns annotation image, annotation order, pixels per microns, and annotation color.
805 | If `load_colors` is False, annotation color is not returned.
806 | """
807 |
808 | imP = Image.open(folder + file_name + '.tif')
809 |
810 |     # ppm is loaded from the companion pickle below; the tif metadata may lack it
811 | im = np.array(imP)
812 |
813 | print(f'loaded annotation image - {file_name} size - {str(im.shape)}')
814 | with open(folder + file_name + '.pickle', 'rb') as handle:
815 | anno_order = pickle.load(handle)
816 | print('loaded annotations')
817 | print(anno_order)
818 | with open(folder + file_name + '_ppm.pickle', 'rb') as handle:
819 | ppm = pickle.load(handle)
820 | print('loaded ppm')
821 | print(ppm)
822 |
823 | if load_colors:
824 | with open(folder + file_name + '_colors.pickle', 'rb') as handle:
825 | anno_color = pickle.load(handle)
826 | print('loaded color annotations')
827 | print(anno_color)
828 | return im, anno_order, ppm['ppm'], anno_color
829 |
830 | else:
831 | return im, anno_order, ppm['ppm']
832 |
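# --- Usage sketch (illustrative; not part of the original file) ---
# Save/load round trip. The folder is hypothetical and must already exist;
# note that folder and file name are concatenated as plain strings.
def _example_annotation_io(label_image):
    save_annotation('tissue_annotations/', label_image, 'annotations',
                    anno_names=['Cortex', 'Medulla'],
                    anno_colors=['red', 'blue'], ppm=1.0)
    return load_annotation('tissue_annotations/', 'annotations')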
833 |
834 | def simonson_vHE(dapi_image, eosin_image):
835 | """
836 | Create virtual H&E images using DAPI and eosin images.
837 | from the developer website:
838 | The method is applied to data on a multiplex/Keyence microscope to produce virtual H&E images
839 | using fluorescence data. If you find it useful, consider citing the relevant article:
840 | Creating virtual H&E images using samples imaged on a commercial multiplex platform
841 | Paul D. Simonson, Xiaobing Ren, Jonathan R. Fromm
842 | doi: https://doi.org/10.1101/2021.02.05.21249150
843 |
844 | Parameters
845 | ----------
846 | dapi_image : ndarray
847 | DAPI image data.
848 | eosin_image : ndarray
849 | Eosin image data.
850 |
851 | Returns
852 | -------
853 | ndarray
854 | Virtual H&E image.
855 | """
856 |
857 | def createVirtualHE(dapi_image, eosin_image, k1, k2, background, beta_DAPI, beta_eosin):
858 | new_image = np.empty([dapi_image.shape[0], dapi_image.shape[1], 4])
859 | new_image[:,:,0] = background[0] + (1 - background[0]) * np.exp(- k1 * beta_DAPI[0] * dapi_image - k2 * beta_eosin[0] * eosin_image)
860 | new_image[:,:,1] = background[1] + (1 - background[1]) * np.exp(- k1 * beta_DAPI[1] * dapi_image - k2 * beta_eosin[1] * eosin_image)
861 | new_image[:,:,2] = background[2] + (1 - background[2]) * np.exp(- k1 * beta_DAPI[2] * dapi_image - k2 * beta_eosin[2] * eosin_image)
862 | new_image[:,:,3] = 1
863 | new_image = new_image*255
864 | return new_image.astype('uint8')
865 |
866 | k1 = k2 = 0.001
867 |
868 | background = [0.25, 0.25, 0.25]
869 |
870 | beta_DAPI = [9.147, 6.9215, 1.0]
871 |
872 | beta_eosin = [0.1, 15.8, 0.3]
873 |
874 |     dapi_image = dapi_image[:,:,0].astype(float) + dapi_image[:,:,1].astype(float)  # cast to avoid uint8 overflow
875 |     eosin_image = eosin_image[:,:,0].astype(float) + eosin_image[:,:,1].astype(float)
876 |
877 | print(dapi_image.shape)
878 | return createVirtualHE(dapi_image, eosin_image, k1, k2, background, beta_DAPI, beta_eosin)
879 |
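# --- Note (illustrative; not part of the original file) ---
# Each vH&E channel c follows a Beer-Lambert-style attenuation,
#   I_c = bg_c + (1 - bg_c) * exp(-k1 * beta_DAPI_c * DAPI - k2 * beta_eosin_c * eosin),
# so nuclei-dense (high DAPI) regions darken towards hematoxylin hues while
# eosin-rich regions shift towards pink.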
880 |
881 | def generate_hires_grid(im, spot_to_spot, pixels_per_micron):
882 | """
883 | Creates a hexagonal grid of a specified size and density.
884 |
885 | Parameters
886 | ----------
887 | im : array-like
888 | Image to fit the grid on (mostly for dimensions).
889 | spot_to_spot : float
890 |         Center-to-center spot distance in microns; determines the grid density.
891 | pixels_per_micron : float
892 | The resolution of the image in pixels per micron.
893 | """
894 | # Step size in pixels for spot_to_spot microns
895 | step_size_in_pixels = spot_to_spot * pixels_per_micron
896 |
897 | # Generate X-axis and Y-axis grid points
898 | X1 = np.arange(step_size_in_pixels, im.shape[1] - 2 * step_size_in_pixels, step_size_in_pixels * np.sqrt(3)/2)
899 | Y1 = np.arange(step_size_in_pixels, im.shape[0] - step_size_in_pixels, step_size_in_pixels)
900 |
901 | # Shift every other column by half a step size (for staggered pattern in columns)
902 | positions = []
903 | for i, x in enumerate(X1):
904 | if i % 2 == 0: # Even columns (no shift)
905 | Y_shifted = Y1
906 | else: # Odd columns (shifted by half)
907 | Y_shifted = Y1 + step_size_in_pixels / 2
908 |
909 | # Combine X and Y positions, and check for boundary conditions
910 | for y in Y_shifted:
911 | if 0 <= x < im.shape[1] and 0 <= y < im.shape[0]:
912 | positions.append([x, y])
913 |
914 | return np.array(positions).T
915 |
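# --- Usage sketch (illustrative; not part of the original file) ---
# A hexagonal grid with 50 um spot-to-spot spacing on a 1 ppm image:
def _example_generate_hires_grid(im):
    positions = generate_hires_grid(im, spot_to_spot=50, pixels_per_micron=1.0)
    return positions  # shape (2, n_spots): row 0 holds x, row 1 holds y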
916 | def create_disk_kernel(radius, shape):
917 | rr, cc = skimage.draw.disk((radius, radius), radius, shape=shape)
918 | kernel = np.zeros(shape, dtype=bool)
919 | kernel[rr, cc] = True
920 | return kernel
921 |
922 | def apply_median_filter(image, kernel):
923 | return scipy.ndimage.median_filter(image, footprint=kernel)
924 |
925 | def grid_anno(
926 | im,
927 | annotation_image_list,
928 | annotation_image_names,
929 | annotation_label_list,
930 | spot_to_spot,
931 | ppm_in,
932 | ppm_out,
933 | ):
934 | print(f'Generating grid with spacing - {spot_to_spot}, from annotation resolution of - {ppm_in} ppm')
935 |
936 | positions = generate_hires_grid(im, spot_to_spot, ppm_in).T # Transpose for correct orientation
937 |
938 | radius = int(round((spot_to_spot / 2 ) * ppm_in)-1)
939 | kernel = create_disk_kernel(radius, (2 * radius + 1, 2 * radius + 1))
940 |
941 | df = pd.DataFrame(positions, columns=['x', 'y'])
942 | df['index'] = df.index
943 |
944 | for idx0, anno in enumerate(annotation_image_list):
945 | anno_orig = skimage.transform.resize(anno, im.shape[:2], preserve_range=True).astype('uint8')
946 | filtered_image = apply_median_filter(anno_orig, kernel)
947 |
948 | median_values = [filtered_image[int(point[1]), int(point[0])] for point in positions]
949 | anno_dict = {idx: annotation_label_list[idx0].get(val, "Unknown") for idx, val in enumerate(median_values)}
950 | number_dict = {idx: val for idx, val in enumerate(median_values)}
951 |
952 | df[annotation_image_names[idx0]] = list(anno_dict.values())
953 | df[annotation_image_names[idx0] + '_number'] = list(number_dict.values())
954 |
955 | df['x'] = df['x'] * ppm_out / ppm_in
956 | df['y'] = df['y'] * ppm_out / ppm_in
957 | df.set_index('index', inplace=True)
958 |
959 | return df
960 |
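# --- Usage sketch (illustrative; not part of the original file) ---
# Sampling a loaded annotation image onto the grid. The label dictionary maps
# pixel values to names and is hypothetical here; use the one from load_annotation.
def _example_grid_anno(im, annotation_image):
    return grid_anno(
        im,
        annotation_image_list=[annotation_image],
        annotation_image_names=['annotations'],
        annotation_label_list=[{1: 'Cortex', 2: 'Medulla'}],
        spot_to_spot=50,
        ppm_in=1.0,
        ppm_out=1.0,
    )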
961 |
962 | def dist2cluster_fast(df, annotation, KNN=5, logscale=False):
963 | from scipy.spatial import cKDTree
964 |
965 | print('calculating distance matrix with cKDTree')
966 |
967 | points = np.vstack([df['x'],df['y']]).T
968 | categories = np.unique(df[annotation])
969 |
970 | Dist2ClusterAll = {c: np.zeros(df.shape[0]) for c in categories}
971 |
972 | for idx, c in enumerate(categories):
973 | indextmp = df[annotation] == c
974 | if np.sum(indextmp) > KNN:
975 | print(c)
976 | cluster_points = points[indextmp]
977 | tree = cKDTree(cluster_points)
978 | # Get KNN nearest neighbors for each point
979 | distances, _ = tree.query(points, k=KNN)
980 | # Store the mean distance for each point to the current category
981 | if KNN == 1:
982 | Dist2ClusterAll[c] = distances # No need to take mean if only one neighbor
983 | else:
984 | Dist2ClusterAll[c] = np.mean(distances, axis=1)
985 |
986 | for c in categories:
987 | if logscale:
988 | df["L2_dist_log10_"+annotation+'_'+c] = np.log10(Dist2ClusterAll[c])
989 | else:
990 | df["L2_dist_"+annotation+'_'+c] = Dist2ClusterAll[c]
991 |
992 | return Dist2ClusterAll
993 |
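# --- Usage sketch (illustrative; not part of the original file) ---
# Adds one distance column per category in place, e.g.
# 'L2_dist_annotations_Cortex', using the mean of the 5 nearest neighbours:
def _example_dist2cluster(df_grid):
    dist2cluster_fast(df_grid, annotation='annotations', KNN=5)
    return df_grid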
994 | import numpy as np
995 | import pandas as pd
996 | import networkx as nx
997 | from scipy.spatial import cKDTree
998 |
999 |
1000 | def anno_to_cells(df_cells, x_col, y_col, df_grid, annotation='annotations', plot=True):
1001 | """
1002 | Maps tissue annotations to segmented cells by nearest neighbors.
1003 |
1004 | Parameters
1005 | ----------
1006 | df_cells : pandas.DataFrame
1007 | Dataframe with cell data.
1008 | x_col : str
1009 | Name of column with x coordinates in df_cells.
1010 | y_col : str
1011 | Name of column with y coordinates in df_cells.
1012 | df_grid : pandas.DataFrame
1013 | Dataframe with grid data.
1014 | annotation : str, optional
1015 | Name of the column with annotations in df_grid. Default is 'annotations'.
1016 | plot : bool, optional
1017 | If true, plots the coordinates of the grid space and the cell space to make sure
1018 | they are aligned. Default is True.
1019 |
1020 | Returns
1021 | -------
1022 | df_cells : pandas.DataFrame
1023 | Updated dataframe with cell data.
1024 | """
1025 |
1026 |     print('Make sure the coordinate systems are aligned, e.g., axes are not flipped.')
1027 | a = np.vstack([df_grid['x'], df_grid['y']])
1028 | b = np.vstack([df_cells[x_col], df_cells[y_col]])
1029 |
1030 | if plot:
1031 | plt.figure(dpi=100, figsize=[10, 10])
1032 | plt.title('cell space')
1033 | plt.plot(b[0], b[1], '.', markersize=1)
1034 | plt.show()
1035 |
1036 |         df_grid_temp = df_grid.iloc[np.where(df_grid[annotation] != 'unassigned')[0], :].copy()
1037 |         aa = np.vstack([df_grid_temp['x'], df_grid_temp['y']])
1038 |         plt.figure(dpi=100, figsize=[10, 10])
1039 |         plt.plot(aa[0], aa[1], '.', markersize=1)
1040 |         plt.title('annotation space')
1041 |         plt.show()
1042 |
1043 | annotations = df_grid.columns[~df_grid.columns.isin(['x', 'y'])]
1044 |
1045 | for k in annotations:
1046 | print('migrating - ' + k + ' to segmentations')
1047 | df_cells[k] = scipy.interpolate.griddata(points=a.T, values=df_grid[k], xi=b.T, method='nearest')
1048 |
1049 | return df_cells
1050 |
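# --- Usage sketch (illustrative; not part of the original file) ---
# Migrating grid annotations to segmented cells whose centroids follow the
# skimage regionprops naming convention ('centroid-1' = x, 'centroid-0' = y):
def _example_anno_to_cells(df_cells, df_grid):
    return anno_to_cells(df_cells, 'centroid-1', 'centroid-0', df_grid, plot=False)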
1051 |
1052 | def anno_to_visium_spots(df_spots, df_grid, ppm, plot=True,how='nearest',max_distance=10e10):
1053 | """
1054 | Maps tissue annotations to Visium spots according to the nearest neighbors.
1055 |
1056 | Parameters
1057 | ----------
1058 | df_spots : pandas.DataFrame
1059 | Dataframe with Visium spot data.
1060 | df_grid : pandas.DataFrame
1061 | Dataframe with grid data.
1062 | ppm : float
1063 |         Scale factor (pixels per micron) between the annotation grid and the Visium full-resolution coordinates.
1064 |     plot : bool, optional
1065 |         If true, plots the coordinates of the grid space and the spot space to make sure
1066 |         they are aligned. Default is True.
1067 |     how : string, optional
1068 |         Interpolation method passed to scipy.interpolate.griddata to associate the two grids. Default is 'nearest'.
1069 |     max_distance : int
1070 |         Maximal distance beyond which points are not migrated (left as None).
1071 |
1072 | Returns
1073 | -------
1074 | df_spots : pandas.DataFrame
1075 | Updated dataframe with Visium spot data.
1076 | """
1077 | import numpy as np
1078 | from scipy.interpolate import griddata
1079 | from scipy.spatial import cKDTree
1080 |
1081 | print('Make sure the coordinate systems are aligned, e.g., axes are not flipped.')
1082 | a = np.vstack([df_grid['x'], df_grid['y']])
1083 | b = np.vstack([df_spots['pxl_col_in_fullres'], df_spots['pxl_row_in_fullres']])*ppm
1084 |
1085 | if plot:
1086 | plt.figure(dpi=100, figsize=[10, 10])
1087 | plt.title('Spot space')
1088 | plt.plot(b[0], b[1], '.', markersize=1)
1089 | plt.show()
1090 |
1091 | plt.figure(dpi=100, figsize=[10, 10])
1092 | plt.plot(a[0], a[1], '.', markersize=1)
1093 | plt.title('Morpho space')
1094 | plt.show()
1095 |
1096 | annotations = df_grid.columns[~df_grid.columns.isin(['x', 'y'])].copy()
1097 |
1098 | for k in annotations:
1099 | print('Migrating - ' + k + ' to segmentations.')
1100 |
1101 | # Interpolation
1102 | df_spots[k] = griddata(points=a.T, values=df_grid[k], xi=b.T, method=how)
1103 |
1104 | # Create KDTree
1105 | tree = cKDTree(a.T)
1106 |
1107 | # Query tree for nearest distance
1108 | distances, _ = tree.query(b.T, distance_upper_bound=max_distance)
1109 | # Mask df_spots where the distance is too high
1110 |         # Use .loc to avoid chained-assignment issues when masking distant spots
1111 |         df_spots.loc[distances == np.inf, k] = None
1112 |
1113 | return df_spots
1114 |
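# --- Usage sketch (illustrative; not part of the original file) ---
# Mapping grid annotations onto spots loaded with read_visium. Spots further
# than 100 grid pixels from any grid point are left unassigned (None):
def _example_anno_to_visium_spots(df_spots, df_grid, ppm):
    return anno_to_visium_spots(df_spots, df_grid, ppm, plot=False,
                                how='nearest', max_distance=100)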
1115 |
1116 | def plot_grid(df, annotation, spotsize=10, save=False, dpi=100, figsize=(5,5), savepath=None):
1117 | """
1118 | Plots a grid.
1119 |
1120 | Parameters
1121 | ----------
1122 | df : pandas.DataFrame
1123 | Dataframe containing data to be plotted.
1124 | annotation : str
1125 | Annotation to be used in the plot.
1126 | spotsize : int, optional
1127 | Size of the spots in the plot. Default is 10.
1128 | save : bool, optional
1129 | If true, saves the plot. Default is False.
1130 | dpi : int, optional
1131 | Dots per inch for the plot. Default is 100.
1132 | figsize : tuple, optional
1133 | Size of the figure. Default is (5,5).
1134 | savepath : str, optional
1135 | Path to save the plot. Default is None.
1136 |
1137 | Returns
1138 | -------
1139 | None
1140 | """
1141 |
1142 | plt.figure(dpi=dpi, figsize=figsize)
1143 |
1144 |     ct_order = list(df[annotation].value_counts().index)
1145 |     ct_color_map = dict(zip(ct_order, sns.color_palette("colorblind", len(ct_order))))
1146 |
1147 | sns.scatterplot(x='x', y='y', hue=annotation, s=spotsize, data=df, palette=ct_color_map, hue_order=ct_order)
1148 |
1149 | plt.grid(False)
1150 | plt.title(annotation)
1151 | plt.axis('equal')
1152 |
1153 | if save:
1154 | if savepath is None:
1155 | raise ValueError('The savepath must be specified if save is True.')
1156 |
1157 | plt.savefig(savepath + '/' + annotation.replace(" ", "_") + '.pdf')
1158 |
1159 | plt.show()
1160 |
1161 |
1162 | def poly_annotator(imarray, annotation, anno_dict, plot_size=1024, use_datashader=False):
1163 | """
1164 | Interactive annotation tool with line annotations using Panel tabs for toggling between morphology and annotation.
1165 |     The principle is to draw closed or semi-closed shapes that are later filled according to the proper annotation.
1166 |
1167 | Parameters
1168 | ----------
1169 | imarray : numpy.ndarray
1170 | Image in numpy array format.
1171 | annotation : numpy.ndarray
1172 | Label image in numpy array format.
1173 | anno_dict : dict
1174 | Dictionary of structures to annotate and colors for the structures.
1175 | plot_size: int, default=1024
1176 | Figure size for plotting.
1177 | use_datashader : Boolean, optional
1178 | If we should use datashader for rendering the image. Recommended for high resolution image. Default is False.
1179 |
1180 | Returns
1181 | -------
1182 | Panel Tabs object
1183 | A Tabs object containing the annotation and image panels.
1184 | dict
1185 | Dictionary containing the Bokeh renderers for the annotation lines.
1186 | """
1187 |
1188 | annotation_c = annotation.astype('uint8').copy()
1189 | annotation_c = np.flip(annotation_c, 0)
1190 |
1191 | imarray_c = imarray.astype('uint8').copy()
1192 | imarray_c = np.flip(imarray_c, 0)
1193 |
1194 | # Create new holoview images
1195 | anno = hv.RGB(annotation_c, bounds=(0, 0, annotation_c.shape[1], annotation_c.shape[0]))
1196 | if use_datashader:
1197 | anno = hd.regrid(anno)
1198 | ds_anno = anno.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
1199 |
1200 | img = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
1201 | if use_datashader:
1202 | img = hd.regrid(img)
1203 | ds_img = img.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
1204 |
1205 | anno_tab_plot_list = [ds_anno]
1206 | img_tab_plot_list = [ds_img]
1207 |
1208 | render_dict = {}
1209 | path_dict = {}
1210 | for key in anno_dict.keys():
1211 | path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=3, line_alpha=0.6)
1212 | render_dict[key] = CustomPolyDraw(source=path_dict[key], num_objects=300, tooltip=key,
1213 | icon_colour=anno_dict[key])
1214 |
1215 | anno_tab_plot_list.append(path_dict[key])
1216 |
1217 | # Create the tabbed view
1218 | p = pn.Tabs(("Annotation", pn.panel(hd.Overlay(anno_tab_plot_list).collate())),
1219 | ("Image", pn.panel(hd.Overlay(img_tab_plot_list).collate())), dynamic=False)
1220 | return p, render_dict
1221 |
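# --- Usage sketch (illustrative; not part of the original file) ---
# Drawing closed polygons per structure; the same anno_dict keys are later
# consumed by object_annotator to fill and label the shapes.
def _example_poly_annotator(imarray, annotation_rgb):
    anno_dict = {'Lobule': 'green'}  # example structure
    tabs, render_dict = poly_annotator(imarray, annotation_rgb, anno_dict)
    return tabs, render_dict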
1222 |
1223 | def object_annotator(imarray, result, anno_dict, render_dict, alpha):
1224 | """
1225 | Extracts annotations and labels them according to brush strokes while generating out_img and the label image corrected_labels, and the anno_dict object.
1226 |
1227 | Parameters
1228 | ----------
1229 | imarray : numpy.ndarray
1230 | Image in numpy array format.
1231 | result : numpy.ndarray
1232 | Label image in numpy array format.
1233 | anno_dict : dict
1234 | Dictionary of structures to annotate and colors for the structures.
1235 | render_dict : dict
1236 | Bokeh data container.
1237 | alpha : float
1238 | Blending factor.
1239 |
1240 | Returns
1241 | -------
1242 | numpy.ndarray
1243 | Corrected label image.
1244 | dict
1245 | Dictionary containing the object colors.
1246 | """
1247 |
1248 | colorpool = ['green', 'cyan', 'brown', 'magenta', 'blue', 'red', 'orange']
1249 |
1250 | result[:] = 1
1251 | corrected_labels = result.copy()
1252 | object_dict = {'unassigned': 'yellow'}
1253 |
1254 | for idx,a in enumerate(render_dict.keys()):
1255 | if render_dict[a].data['xs']:
1256 | print(a)
1257 | for o in range(len(render_dict[a].data['xs'])):
1258 | x = np.array(render_dict[a].data['xs'][o]).astype(int)
1259 | y = np.array(render_dict[a].data['ys'][o]).astype(int)
1260 | rr, cc = polygon(y, x)
1261 |                 inshape = (result.shape[0] > rr) & (0 < rr) & (result.shape[1] > cc) & (0 < cc)
[... file lines 1262-1771 are missing from this dump ...]
1772 | def calculate_axis_3p(df_ibex, anno, structure, output_col, w=[0.2, 0.8], prefix='L2_dist_'):
1773 |     """
1774 |     Function to calculate a unimodal normalized axis based on the ordered structure S1 -> S2 -> S3.
1775 |
1776 | Parameters:
1777 | -----------
1778 | df_ibex : DataFrame
1779 | Input DataFrame that contains the data.
1780 | anno : str, optional
1781 | Annotation column.
1782 | structure : list of str, optional
1783 |         List of structures to be measured: [S1, S2, S3].
1784 | w : list of float, optional
1785 | List of weights between the 2 components of the axis w[0] * S1->S2 and w[1] * S2->S3. Default is [0.2,0.8].
1786 | prefix : str, optional
1787 | Prefix for the column names in DataFrame. Default is 'L2_dist_'.
1788 | output_col : str, optional
1789 | Name of the output column.
1790 |
1791 | Returns:
1792 | --------
1793 | df : DataFrame
1794 | DataFrame with calculated new column.
1795 | """
1796 | df = df_ibex.copy()
1797 | a1 = (df[prefix + anno +'_'+ structure[0]] - df[prefix + anno +'_'+ structure[1]]) \
1798 | /(df[prefix + anno +'_'+ structure[0]] + df[prefix + anno +'_'+ structure[1]])
1799 |
1800 | a2 = (df[prefix + anno +'_'+ structure[1]] - df[prefix + anno +'_'+ structure[2]]) \
1801 | /(df[prefix + anno +'_'+ structure[1]] + df[prefix + anno +'_'+ structure[2]])
1802 | df[output_col] = w[0]*a1 + w[1]*a2
1803 |
1804 | return df
1805 |
1806 |
1807 | def calculate_axis_2p(df_ibex, anno, structure, output_col, prefix='L2_dist_'):
1808 | """
1809 |     Function to calculate a unimodal normalized axis based on the ordered structure S1 -> S2.
1810 |
1811 | Parameters:
1812 | -----------
1813 | df_ibex : DataFrame
1814 | Input DataFrame that contains the data.
1815 | anno : str, optional
1816 | Annotation column.
1817 | structure : list of str, optional
1818 |         List of structures to be measured: [S1, S2].
1819 | prefix : str, optional
1820 | Prefix for the column names in DataFrame. Default is 'L2_dist_'.
1821 | output_col : str
1822 | Name of the output column.
1823 |
1824 | Returns:
1825 | --------
1826 | df : DataFrame
1827 | DataFrame with calculated new column.
1828 | """
1829 | df = df_ibex.copy()
1830 | a1 = (df[prefix + anno +'_'+ structure[0]] - df[prefix + anno +'_'+ structure[1]]) \
1831 | /(df[prefix + anno +'_'+ structure[0]] + df[prefix + anno +'_'+ structure[1]])
1832 |
1833 | df[output_col] = a1
1834 |
1835 | return df
1836 |
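# Worked example of the 2p axis (illustrative numbers): a cell at distance 2 from S1
# and 8 from S2 scores (2 - 8) / (2 + 8) = -0.6, i.e. it sits close to S1; the sign
# flips as the cell approaches S2, giving a normalized S1 -> S2 coordinate in [-1, 1].
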
1837 | def bin_axis(ct_order, cutoff_values, df, axis_anno_name):
1838 | """
1839 | Bins a column of a DataFrame based on cutoff values and assigns manual bin labels.
1840 |
1841 | Parameters:
1842 | ct_order (list): The order of manual bin labels.
1843 | cutoff_values (list): The cutoff values used for binning.
1844 | df (pandas.DataFrame): The DataFrame containing the column to be binned.
1845 | axis_anno_name (str): The name of the column to be binned.
1846 |
1847 | Returns:
1848 | pandas.DataFrame: The modified DataFrame with manual bin labels assigned.
1849 | """
1850 | # Manual annotations
1851 | df['manual_bin_' + axis_anno_name] = 'unassigned'
1852 | df['manual_bin_' + axis_anno_name] = df['manual_bin_' + axis_anno_name].astype('object')
1853 | df.loc[np.array(df[axis_anno_name] < cutoff_values[0]), 'manual_bin_' + axis_anno_name] = ct_order[0]
1854 | print(ct_order[0] + '= (' + str(cutoff_values[0]) + '>' + axis_anno_name + ')')
1855 |
1856 | for idx, r in enumerate(cutoff_values[:-1]):
1857 | df.loc[np.array(df[axis_anno_name] >= cutoff_values[idx]) & np.array(df[axis_anno_name] < cutoff_values[idx+1]),
1858 | 'manual_bin_' + axis_anno_name] = ct_order[idx+1]
1859 | print(ct_order[idx+1] + '= (' + str(cutoff_values[idx]) + '<=' + axis_anno_name + ') & (' + str(cutoff_values[idx+1]) + '>' + axis_anno_name + ')' )
1860 |
1861 | df.loc[np.array(df[axis_anno_name] >= cutoff_values[-1]), 'manual_bin_' + axis_anno_name] = ct_order[-1]
1862 | print(ct_order[-1] + '= (' + str(cutoff_values[-1]) + '<=' + axis_anno_name + ')')
1863 |
1864 | df['manual_bin_' + axis_anno_name] = df['manual_bin_' + axis_anno_name].astype('category')
1865 | df['manual_bin_' + axis_anno_name + '_int'] = df['manual_bin_' + axis_anno_name].cat.codes
1866 |
1867 | return df
1868 |
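# A minimal usage sketch for bin_axis (labels and cutoffs are hypothetical; note that
# ct_order needs exactly one more entry than cutoff_values):
def _example_bin_axis(df):
    # Values < -0.3 -> 'S1_zone', [-0.3, 0.3) -> 'interface', >= 0.3 -> 'S2_zone';
    # the integer codes land in 'manual_bin_cmj_axis_int'.
    return bin_axis(ct_order=['S1_zone', 'interface', 'S2_zone'],
                    cutoff_values=[-0.3, 0.3], df=df, axis_anno_name='cmj_axis')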
1869 |
1870 | def plot_cont(data, x_col='centroid-1', y_col='centroid-0', color_col='L2_dist_annotation_tissue_Edge',
1871 | cmap='jet', title='L2_dist_annotation_tissue_Edge', s=1, dpi=100, figsize=[10,10]):
1872 | plt.figure(dpi=dpi, figsize=figsize)
1873 |
1874 | # Create an axes instance for the scatter plot
1875 | ax = plt.subplot(111)
1876 |
1877 | # Create the scatterplot
1878 | scatter = sns.scatterplot(x=x_col, y=y_col, data=data,
1879 | c=data[color_col], cmap=cmap, s=s,
1880 | legend=False, ax=ax) # Use the created axes
1881 |
1882 | plt.grid(False)
1883 | plt.axis('equal')
1884 | plt.title(title)
1885 | for pos in ['right', 'top', 'bottom', 'left']:
1886 | ax.spines[pos].set_visible(False)
1887 |
1888 | # Add colorbar
1889 | norm = plt.Normalize(data[color_col].min(), data[color_col].max())
1890 | sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
1891 | sm.set_array([])
1892 | cbar = plt.colorbar(sm, ax=ax, label=title, aspect=30) # Use the created axes for the colorbar
1893 | cbar.ax.set_position([0.85, 0.25, 0.05, 0.5]) # adjust the position as needed
1894 |
1895 |
1896 | plt.show()
1897 |
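# Example call for plot_cont (a sketch; assumes a regionprops-style table with
# 'centroid-0'/'centroid-1' centroid columns and the hypothetical axis column above):
def _example_plot_cont(df):
    plot_cont(df, color_col='cmj_axis', title='Cortex -> Edge axis', s=2)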
1898 |
1899 |
1900 | def annotator_fun(imarray, labels, anno_dict, plot_size=1024,invert_y=False,use_datashader=False,alpha=0.7):
1901 | """
1902 | Interactive annotation tool with line annotations, using Panel tabs to toggle between morphology and annotation views. This version also adds an "Update annotations" button that re-renders the overlay in place.
1903 |
1904 | Parameters
1905 | ----------
1906 | imarray: np.array
1907 | Image in numpy array format.
1908 | labels: np.array
1909 | Label image in numpy array format.
1910 | anno_dict: dict
1911 | Dictionary of structures to annotate and colors for the structures.
1912 | plot_size: int, default=1024
1913 | Figure size for plotting.
1914 | invert_y : bool
1915 | Invert the plot along the y axis.
1916 | use_datashader : bool, optional
1917 | Whether to use datashader for rendering the image. Recommended for high-resolution images. Default is False.
1918 | alpha : float
1919 | Blending factor for the "Annotation" tab overlay.
1920 |
1921 | Returns
1922 | -------
1923 | Panel Tabs object
1924 | A Tabs object containing the annotation and image panels.
1925 | dict
1926 | Dictionary of Bokeh renderers for each annotation.
1927 | """
1928 | import logging
1929 | logging.getLogger('bokeh.core.validation.check').setLevel(logging.ERROR)
1930 |
1931 | def create_images(labels, imarray):
1932 | # convert label image to rgb for annotation
1933 | labels_rgb = rgb_from_labels(labels, colors=list(anno_dict.values()))
1934 | annotation = overlay_labels(imarray,labels_rgb,alpha=alpha,show=False)
1935 |
1936 | annotation_c = annotation.astype('uint8').copy()
1937 | if not invert_y:
1938 | annotation_c = np.flip(annotation_c, 0)
1939 |
1940 | imarray_c = imarray.astype('uint8').copy()
1941 | if not invert_y:
1942 | imarray_c = np.flip(imarray_c, 0)
1943 |
1944 | # Create new holoview images
1945 | im_1 = hv.RGB(imarray_c, bounds=(0, 0, imarray_c.shape[1], imarray_c.shape[0]))
1946 | if use_datashader:
1947 | im_1 = hd.regrid(im_1)
1948 | ds_im_1 = im_1.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
1949 |
1950 | anno_1 = hv.RGB(annotation_c, bounds=(0, 0, annotation_c.shape[1], annotation_c.shape[0]))
1951 | if use_datashader:
1952 | anno_1 = hd.regrid(anno_1)
1953 | ds_anno_1 = anno_1.options(aspect="equal", frame_height=int(plot_size), frame_width=int(plot_size))
1954 | return ds_im_1, ds_anno_1
1955 |
1956 | def create_tools(im_tab_plot_list, anno_tab_plot_list):
1957 | im_render_dict = {}
1958 | anno_render_dict = {}
1959 | im_path_dict = {}
1960 | anno_path_dict = {}
1961 | for key in anno_dict.keys():
1962 | im_path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
1963 | im_render_dict[key] = CustomFreehandDraw(source=im_path_dict[key], num_objects=200, tooltip=key,
1964 | icon_colour=anno_dict[key])
1965 |
1966 | anno_path_dict[key] = hv.Path([]).opts(color=anno_dict[key], line_width=5, line_alpha=0.4)
1967 | anno_render_dict[key] = CustomFreehandDraw(source=anno_path_dict[key], num_objects=200, tooltip=key,
1968 | icon_colour=anno_dict[key])
1969 |
1970 | SynchronisedFreehandDrawLink(im_path_dict[key], anno_path_dict[key])
1971 | im_tab_plot_list.append(im_path_dict[key])
1972 | anno_tab_plot_list.append(anno_path_dict[key])
1973 | return im_render_dict, im_path_dict, anno_path_dict, anno_render_dict
1974 |
1975 | def update(x):
1976 | labels = labels_l[0]
1977 | anno_dict = anno_dict_l[0]
1978 | render_dict = render_dict_l[0]
1979 | loader.visible = True
1980 |
1981 | labels = update_annotator(imarray, labels, anno_dict, render_dict, alpha=0.5,plot=False, copy=False)
1982 | ds_im_1, ds_anno_1 = create_images(labels, imarray)
1983 | im_tab_plot_list = [ds_im_1]
1984 | anno_tab_plot_list = [ds_anno_1]
1985 | render_dict, im_path_dict, anno_path_dict, anno_render_dict = create_tools(im_tab_plot_list, anno_tab_plot_list)
1986 |
1987 |
1988 | pa.object = hd.Overlay(im_tab_plot_list).collate()
1989 | pa1.object = hd.Overlay(anno_tab_plot_list).collate()
1990 | pt = pn.Tabs(("Image", pa),("Annotation", pa1),dynamic=False)
1991 | p[0].object = pt
1992 | #pt[1][1].object = pn.panel(hd.Overlay(img_tab_plot_list).collate())
1993 | labels_l[0] = labels
1994 | anno_dict_l[0] = anno_dict
1995 | render_dict_l[0] = render_dict
1996 |
1997 | loader.visible = False
1998 | # Create the tabbed view
1999 |
2000 | labels_l = [labels]  # wrap in a one-element list so update() can mutate it in place
2001 | ds_im_1, ds_anno_1 = create_images(labels_l[0], imarray)
2002 | im_tab_plot_list = [ds_im_1]
2003 | anno_tab_plot_list = [ds_anno_1]
2004 | render_dict, im_path_dict, anno_path_dict, anno_render_dict = create_tools(im_tab_plot_list, anno_tab_plot_list)
2005 |
2006 |
2007 | anno_dict_l = [anno_dict.copy()]
2008 | render_dict_l = [render_dict.copy()]
2009 |
2010 | pa = pn.panel(hd.Overlay(im_tab_plot_list).collate())
2011 | pa1 = pn.panel(hd.Overlay(anno_tab_plot_list).collate())
2012 | pt = pn.Tabs(("Image", pa),("Annotation", pa1),dynamic=False)
2013 |
2014 | button = pn.widgets.Button(name="Update annotations")
2015 | button.on_click(callback=update)
2016 | loader = pn.indicators.LoadingSpinner(value=True, size=20, name="Updating annotation...")
2017 | loader.visible = False
2018 | p = pn.Column(pn.Row(button, loader), pt)#.servable()
2019 | return p, render_dict#, pt
2020 |
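# A minimal usage sketch for annotator_fun (contents are hypothetical; imarray is an
# HxWx3 uint8 image and labels an HxW integer label image of the same extent):
def _example_annotator_fun(imarray, labels):
    anno_dict = {'Cortex': 'green', 'Medulla': 'magenta'}
    p, render_dict = annotator_fun(imarray, labels, anno_dict, plot_size=800)
    # display `p` in a Jupyter cell, draw on either tab, then click "Update annotations"
    return p, render_dict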
2021 |
--------------------------------------------------------------------------------