├── .gitignore
├── CITATION.cff
├── README.md
├── code_demo
│   ├── README
│   ├── dry_bin2.tif
│   ├── flow_image.tif
│   ├── requirements.txt
│   ├── segmentation_demo_3D.ipynb
│   ├── utils.py
│   └── wet_bin2.tif
├── flow_segmentation_3D.ipynb
├── flow_segmentation_3D_nihal.ipynb
├── image_segmentation_3D.ipynb
├── image_segmentation_3D_cat.ipynb
├── image_segmentation_3D_nihal.ipynb
├── requirements.txt
├── segmented_analysis.ipynb
└── utils.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .env
--------------------------------------------------------------------------------
/CITATION.cff:
--------------------------------------------------------------------------------
1 | cff-version: 1.2.0
2 | message: "If you use this work in your research, please cite it as follows."
3 | authors:
4 | - family-names: "Spurin"
5 |   given-names: "Catherine"
6 | - family-names: "Ellman"
7 |   given-names: "Sharon"
8 | - family-names: "Sherburn"
9 |   given-names: "Dane"
10 | - family-names: "Bultreys"
11 |   given-names: "Tom"
12 | - family-names: "Tchelepi"
13 |   given-names: "Hamdi"
14 | title: "Python workflow for segmenting multiphase flow in porous rocks"
15 | version: 1.0.0
16 | date-released: 2024-03-30
17 | url: "https://github.com/cspurin/image_processing"
18 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Image Processing Workflow
2 | Python notebook that uses skimage packages to segment an image. Can be used as a substitute for Avizo.
3 | The paper associated with this work is: https://doi.org/10.31223/X58D69
4 |
5 | ## Guide to getting the code running
6 | Recommended steps for running the notebook.
7 |
8 | 1. Download and install vscode: https://code.visualstudio.com/
9 | 2. Download and install python: https://www.python.org/downloads/ (download version 3.10.11)
10 | 3. Install the Python extension in vscode. In vscode, click the extensions icon in the sidebar and search for "python".
11 | 4. Open a new folder (point it to the folder containing the contents of this repository, wherever that is on your computer).
12 | 5. Create a virtual environment. In vscode click View>Terminal and enter the command below:
13 |    python -m venv myenv
14 |    Then for Mac enter: source myenv/bin/activate
15 |    or for Windows: myenv\Scripts\activate
16 | 6. Install the necessary packages by typing this in the terminal:
17 |    pip install -r requirements.txt
18 |    NB: you must be in the folder containing the requirements.txt file. Alternatively, you can install the packages individually, e.g. pip install numpy.
19 | Now open the notebook (e.g. image_segmentation_3D.ipynb) and begin!
20 |
21 | ## Pore space segmentation
22 | This is done with image_segmentation_3D.ipynb. Here we segment the pore space using watershed segmentation.
23 |
24 | The pore space is loaded and cropped to remove everything that isn't the core (i.e. no sleeve or core holder in the images):
25 | 
26 |
27 | The data is filtered using a non-local means filter:
28 | 
29 |
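For reference, a minimal sketch of this filtering step with scikit-image (the `patch_size`, `patch_distance` and `h` values below are illustrative, not the values used in the paper; `data` is assumed to be the cropped 3D array):

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# estimate the noise standard deviation from the data
sigma = np.mean(estimate_sigma(data))

# non-local means filter; tune the parameters for your own images
filtered = denoise_nl_means(
    data,
    patch_size=5,       # size of the patches used for denoising
    patch_distance=6,   # maximal search distance for similar patches
    h=0.8 * sigma,      # cut-off distance (filter strength)
    fast_mode=True,
    sigma=sigma,
)
```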
30 | Then the image is segmented using a watershed segmentation. Values for this are chosen by the user:
31 | 
32 |
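The notebook walks through these choices interactively; as a rough sketch, the marker-based watershed follows the standard scikit-image recipe (here `filtered` is the denoised image and `threshold` a hypothetical user-chosen pore/grain cut-off):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# binarise: pore space is dark, grains are bright
pore = filtered < threshold

# distance-transform peaks sit at pore centres and act as watershed markers
distance = ndi.distance_transform_edt(pore)
coords = peak_local_max(distance, min_distance=5, labels=pore)
mask = np.zeros(distance.shape, dtype=bool)
mask[tuple(coords.T)] = True
markers, _ = ndi.label(mask)

# flood the inverted distance map from the markers to split touching pores
labels = watershed(-distance, markers, mask=pore)
```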
33 |
34 | ## Flow segmentation
35 | This is done with flow_segmentation_3D.ipynb. Here we segment the gas in the images using differential imaging and a simple threshold. This requires an image with only brine in the pore space prior to the injection of the gas (or oil).
36 |
37 | The wet image is cropped and registered to the dry scan.
38 | The flow image is cropped and registered to the dry scan.
39 |
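For a translation-only alignment, the registration can be sketched with `phase_cross_correlation` (names like `dry` and `wet` are assumptions; the notebook may use different cropping and registration settings):

```python
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

# estimate the (z, y, x) shift that best aligns the wet scan with the dry scan
shift, error, diffphase = phase_cross_correlation(dry, wet)

# apply that shift to move the wet image onto the dry image's grid
wet_registered = ndi.shift(wet, shift)
```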
40 | The images are filtered using the non-local means filter:
41 | 
42 |
43 | We subtract the wet image from the flow image to get the location of the gas:
44 | 
45 |
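In code, this differential-imaging step is a subtraction followed by the `simple_thresholding` helper from utils.py (the threshold fractions below are illustrative; `flow_image` and `wet_registered` are assumed to be the registered 3D arrays):

```python
import numpy as np
from utils import simple_thresholding

# subtract the wet image from the flow image; gas shows up as a large difference
difference = flow_image.astype(np.float32) - wet_registered.astype(np.float32)

# keep voxels in a user-chosen fraction of the difference image's range
gas = simple_thresholding(difference, min_threshold=0.6, max_threshold=1.0)
```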
46 | The segmented dry scan is then used to locate the water in the final segmentation of the images.
47 |
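One way to assemble the final labelled image (a sketch, assuming `pore` is the pore mask from the segmented dry scan and `gas` the mask from the previous step):

```python
import numpy as np

# 0 = grain, 1 = water (pore space not occupied by gas), 2 = gas
segmented = np.zeros(pore.shape, dtype=np.uint8)
segmented[pore & ~gas] = 1
segmented[pore & gas] = 2
```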
48 |
--------------------------------------------------------------------------------
/code_demo/README:
--------------------------------------------------------------------------------
1 | # Image Processing Workflow
2 | Python notebook that uses skimage packages to segment an image. Can be used as a substitute for Avizo.
3 | The paper associated with this work is: https://doi.org/10.31223/X58D69
4 |
5 | ## Guide to getting the code running
6 | Recommended steps for running the notebook.
7 |
8 | 1. Download and install vscode: https://code.visualstudio.com/
9 | 2. Download and install python: https://www.python.org/downloads/ (download version 3.10.11)
10 | 3. Install the Python extension in vscode. In vscode, click the extensions icon in the sidebar and search for "python".
11 | 4. Open a new folder (point it to the folder containing the contents of this repository, wherever that is on your computer).
12 | 5. Create a virtual environment. In vscode click View>Terminal and enter the command below:
13 |    python -m venv myenv
14 |    Then for Mac enter: source myenv/bin/activate
15 |    or for Windows: myenv\Scripts\activate
16 | 6. Install the necessary packages by typing this in the terminal:
17 |    pip install -r requirements.txt
18 |    NB: you must be in the folder containing the requirements.txt file. Alternatively, you can install the packages individually, e.g. pip install numpy.
19 | Now open the notebook (segmentation_demo_3D.ipynb) and begin!
20 |
21 |
22 |
--------------------------------------------------------------------------------
/code_demo/dry_bin2.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cspurin/image_processing/df46a595bed7c852572124d5085fceeae8a642ff/code_demo/dry_bin2.tif
--------------------------------------------------------------------------------
/code_demo/flow_image.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cspurin/image_processing/df46a595bed7c852572124d5085fceeae8a642ff/code_demo/flow_image.tif
--------------------------------------------------------------------------------
/code_demo/requirements.txt:
--------------------------------------------------------------------------------
1 | alabaster==1.0.0
2 | annotated-types==0.7.0
3 | app-model==0.2.8
4 | appdirs==1.4.4
5 | appnope==0.1.3
6 | asttokens==2.2.1
7 | attrs==24.2.0
8 | babel==2.16.0
9 | backcall==0.2.0
10 | brokenaxes==0.6.2
11 | build==1.2.2
12 | cachey==0.2.1
13 | certifi==2024.8.30
14 | charset-normalizer==3.3.2
15 | click==8.1.7
16 | cloudpickle==3.0.0
17 | comm==0.1.3
18 | contourpy==1.0.7
19 | cycler==0.11.0
20 | dask==2024.9.0
21 | debugpy==1.6.7
22 | decorator==5.1.1
23 | docstring_parser==0.16
24 | docutils==0.21.2
25 | einops==0.6.1
26 | executing==1.2.0
27 | flexcache==0.3
28 | flexparser==0.3.1
29 | fonttools==4.39.3
30 | freetype-py==2.5.1
31 | fsspec==2024.9.0
32 | HeapDict==1.0.1
33 | hsluv==5.0.4
34 | idna==3.10
35 | imageio==2.28.1
36 | imagesize==1.4.1
37 | importlib_metadata==8.5.0
38 | in-n-out==0.2.1
39 | ipykernel==6.22.0
40 | ipython==8.13.1
41 | ipywidgets==8.1.5
42 | jedi==0.18.2
43 | Jinja2==3.1.4
44 | jsonschema==4.23.0
45 | jsonschema-specifications==2023.12.1
46 | jupyter_client==8.2.0
47 | jupyter_core==5.3.0
48 | jupyterlab_widgets==3.0.13
49 | kiwisolver==1.4.4
50 | lazy_loader==0.2
51 | locket==1.0.0
52 | magicgui==0.9.1
53 | markdown-it-py==3.0.0
54 | MarkupSafe==2.1.5
55 | matplotlib==3.7.1
56 | matplotlib-inline==0.1.6
57 | mdurl==0.1.2
58 | napari==0.5.3
59 | napari-console==0.0.9
60 | napari-plugin-engine==0.2.0
61 | napari-svg==0.2.0
62 | nest-asyncio==1.5.6
63 | networkx==3.1
64 | npe2==0.7.7
65 | numpy==1.24.3
66 | numpydoc==1.8.0
67 | packaging==23.1
68 | pandas==2.2.3
69 | parso==0.8.3
70 | partd==1.4.2
71 | pexpect==4.8.0
72 | pickleshare==0.7.5
73 | Pillow==9.5.0
74 | Pint==0.24.3
75 | platformdirs==3.5.0
76 | pooch==1.8.2
77 | prompt-toolkit==3.0.38
78 | psutil==5.9.5
79 | psygnal==0.11.1
80 | ptyprocess==0.7.0
81 | pure-eval==0.2.2
82 | pyconify==0.1.6
83 | pydantic==2.9.2
84 | pydantic-compat==0.1.2
85 | pydantic_core==2.23.4
86 | Pygments==2.18.0
87 | PyOpenGL==3.1.7
88 | pyparsing==3.0.9
89 | pyproject_hooks==1.1.0
90 | python-dateutil==2.8.2
91 | pytz==2024.2
92 | pyvista==0.44.1
93 | PyWavelets==1.4.1
94 | PyYAML==6.0.2
95 | pyzmq==25.0.2
96 | qtconsole==5.6.0
97 | QtPy==2.4.1
98 | referencing==0.35.1
99 | requests==2.32.3
100 | rich==13.8.1
101 | rpds-py==0.20.0
102 | scikit-image==0.20.0
103 | scipy==1.10.1
104 | scooby==0.10.0
105 | shellingham==1.5.4
106 | six==1.16.0
107 | snowballstemmer==2.2.0
108 | Sphinx==8.0.2
109 | sphinxcontrib-applehelp==2.0.0
110 | sphinxcontrib-devhelp==2.0.0
111 | sphinxcontrib-htmlhelp==2.1.0
112 | sphinxcontrib-jsmath==1.0.1
113 | sphinxcontrib-qthelp==2.0.0
114 | sphinxcontrib-serializinghtml==2.0.0
115 | stack-data==0.6.2
116 | superqt==0.6.7
117 | tabulate==0.9.0
118 | tifffile==2023.4.12
119 | tomli_w==1.0.0
120 | toolz==0.12.1
121 | tornado==6.3.1
122 | tqdm==4.66.5
123 | traitlets==5.9.0
124 | typer==0.12.5
125 | typing_extensions==4.12.2
126 | tzdata==2024.2
127 | urllib3==2.2.3
128 | vispy==0.14.3
129 | vtk==9.3.1
130 | wcwidth==0.2.6
131 | widgetsnbextension==4.0.13
132 | wrapt==1.16.0
133 | zipp==3.20.2
134 |
--------------------------------------------------------------------------------
/code_demo/utils.py:
--------------------------------------------------------------------------------
import os

import einops
import numpy as np
from PIL import Image


# loading images
def load(dirname, start_slice, slices):
    """Load `slices` consecutive slices from `dirname`, starting at `start_slice`."""
    flow_data = []

    # sort the filenames so the slices are stacked in the correct order
    fnames = sorted(os.listdir(dirname))
    for i in range(start_slice, start_slice + slices):
        im = Image.open(os.path.join(dirname, fnames[i]))
        imarray = np.array(im)
        flow_data.append(imarray)

    # convert to a 3D array and rescale to 8-bit (0-255)
    flow_data = np.asarray(flow_data)
    flow_data = preprocess(flow_data)
    return flow_data


def preprocess(img: np.ndarray, normalize_axis=None) -> np.ndarray:
    """Converts image to an 8-bit image. Optionally normalizes along `normalize_axis`."""

    assert isinstance(img, np.ndarray)
    assert img.dtype in (np.uint8, np.uint16, np.float32, np.float64), f"Unsupported dtype: {img.dtype}"
    assert normalize_axis is None or 0 <= normalize_axis % img.ndim < img.ndim

    if img.dtype in (np.uint16, np.float32, np.float64):
        # rescale to the full 8-bit range, then cast
        img = (img - img.min()) / (img.max() - img.min()) * 255
        # per-axis normalisation (currently unused):
        # mn = np.min(img, axis=normalize_axis, keepdims=True)
        # mx = np.max(img, axis=normalize_axis, keepdims=True)
        # img = (img - mn) / (mx - mn) * 255
        img = img.astype(np.uint8)

    assert isinstance(img, np.ndarray)
    assert img.dtype == np.uint8

    return img


# function to mask with the dry scan
def mask_with_dry(img, dry_scan):
    if img.max() > 1:
        # rescale to 8-bit before masking
        img = (img - img.min()) / (img.max() - img.min()) * 255
        img = img.astype(np.uint8)

    # create a mask from the segmented dry scan (0 = solid/grain)
    mask = (dry_scan == 0)

    assert img.shape == mask.shape
    assert mask.dtype == np.bool_

    # set the masked (solid) voxels to white
    foreground = img.copy()
    foreground[mask] = 255

    masked_img = np.array(foreground, dtype=np.uint8)
    return masked_img


def sanity_check(img: np.ndarray, mask: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Overlay `mask` in red on the grayscale `img` for a visual check."""
    assert isinstance(mask, np.ndarray)
    assert mask.dtype == np.bool_
    assert isinstance(img, np.ndarray)
    assert img.dtype == np.uint8
    assert len(mask.shape) == 2
    assert len(img.shape) == 2  # grayscale

    background = einops.repeat(img, "h w -> h w c", c=3)  # grayscale to rgb

    foreground = background.copy()
    foreground[mask] = [255, 0, 0]

    # alpha-blend the red overlay onto the original image
    foreground = foreground.astype(np.float16)
    background = background.astype(np.float16)

    composite = background * (1.0 - alpha) + foreground * alpha
    composite = np.array(composite, dtype=np.uint8)

    assert isinstance(composite, np.ndarray)
    assert composite.dtype == np.uint8
    assert len(composite.shape) == 3  # rgb

    return composite


def simple_thresholding(img: np.ndarray, min_threshold: float, max_threshold: float) -> np.ndarray:
    """Boolean mask of voxels whose intensity lies between the given fractions (0-1) of the image's range."""
    lo = (img.max() - img.min()) * min_threshold + img.min()
    hi = (img.max() - img.min()) * max_threshold + img.min()
    return (lo <= img) & (img <= hi)
--------------------------------------------------------------------------------
/code_demo/wet_bin2.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cspurin/image_processing/df46a595bed7c852572124d5085fceeae8a642ff/code_demo/wet_bin2.tif
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | scikit-learn
2 | scikit-image
3 | napari
4 | jupyter
5 | Pillow
6 | ipywidgets
7 | matplotlib
8 | einops
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
import os

import einops
import numpy as np
from PIL import Image


# loading images
def load(dirname, start_slice, slices):
    """Load `slices` consecutive slices from `dirname`, starting at `start_slice`."""
    flow_data = []

    # sort the filenames so the slices are stacked in the correct order
    fnames = sorted(os.listdir(dirname))
    for i in range(start_slice, start_slice + slices):
        im = Image.open(os.path.join(dirname, fnames[i]))
        imarray = np.array(im)
        flow_data.append(imarray)

    # convert to a 3D array and rescale to 8-bit (0-255)
    flow_data = np.asarray(flow_data)
    flow_data = preprocess(flow_data)
    return flow_data


def preprocess(img: np.ndarray, normalize_axis=None) -> np.ndarray:
    """Converts image to an 8-bit image. Optionally normalizes along `normalize_axis`."""

    assert isinstance(img, np.ndarray)
    assert img.dtype in (np.uint8, np.uint16, np.float32, np.float64), f"Unsupported dtype: {img.dtype}"
    assert normalize_axis is None or 0 <= normalize_axis % img.ndim < img.ndim

    if img.dtype in (np.uint16, np.float32, np.float64):
        # rescale to the full 8-bit range, then cast
        img = (img - img.min()) / (img.max() - img.min()) * 255
        # per-axis normalisation (currently unused):
        # mn = np.min(img, axis=normalize_axis, keepdims=True)
        # mx = np.max(img, axis=normalize_axis, keepdims=True)
        # img = (img - mn) / (mx - mn) * 255
        img = img.astype(np.uint8)

    assert isinstance(img, np.ndarray)
    assert img.dtype == np.uint8

    return img


# function to mask with the dry scan
def mask_with_dry(img, dry_scan):
    if img.max() > 1:
        # rescale to 8-bit before masking
        img = (img - img.min()) / (img.max() - img.min()) * 255
        img = img.astype(np.uint8)

    # create a mask from the segmented dry scan (0 = solid/grain)
    mask = (dry_scan == 0)

    assert img.shape == mask.shape
    assert mask.dtype == np.bool_

    # set the masked (solid) voxels to white
    foreground = img.copy()
    foreground[mask] = 255

    masked_img = np.array(foreground, dtype=np.uint8)
    return masked_img


def sanity_check(img: np.ndarray, mask: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Overlay `mask` in red on the grayscale `img` for a visual check."""
    assert isinstance(mask, np.ndarray)
    assert mask.dtype == np.bool_
    assert isinstance(img, np.ndarray)
    assert img.dtype == np.uint8
    assert len(mask.shape) == 2
    assert len(img.shape) == 2  # grayscale

    background = einops.repeat(img, "h w -> h w c", c=3)  # grayscale to rgb

    foreground = background.copy()
    foreground[mask] = [255, 0, 0]

    # alpha-blend the red overlay onto the original image
    foreground = foreground.astype(np.float16)
    background = background.astype(np.float16)

    composite = background * (1.0 - alpha) + foreground * alpha
    composite = np.array(composite, dtype=np.uint8)

    assert isinstance(composite, np.ndarray)
    assert composite.dtype == np.uint8
    assert len(composite.shape) == 3  # rgb

    return composite


def simple_thresholding(img: np.ndarray, min_threshold: float, max_threshold: float) -> np.ndarray:
    """Boolean mask of voxels whose intensity lies between the given fractions (0-1) of the image's range."""
    lo = (img.max() - img.min()) * min_threshold + img.min()
    hi = (img.max() - img.min()) * max_threshold + img.min()
    return (lo <= img) & (img <= hi)
--------------------------------------------------------------------------------