├── Code
│   ├── ch1.ipynb
│   ├── ch2.ipynb
│   ├── ch3.ipynb
│   └── ch4.ipynb
├── LICENSE
├── README.md
└── Theory
    ├── chapter1.pdf
    ├── chapter2.pdf
    ├── chapter3.pdf
    └── chapter4.pdf
/Code/ch1.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Load images\n", 8 | "\n", 9 | "In this chapter, we'll work with sections of a computed tomography (CT) scan from The Cancer Imaging Archive. CT uses a rotating X-ray tube to create a 3D image of the target area.\n", 10 | "\n", 11 | "The actual content of the image depends on the instrument used: photographs measure visible light, x-ray and CT measure radiation absorbance, and MRI scanners measure magnetic fields.\n", 12 | "\n", 13 | "To warm up, use the imageio package to load a single DICOM image from the scan volume and check out a few of its attributes." 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "# Import ImageIO\n", 23 | "import imageio\n", 24 | "\n", 25 | "# Load \"chest-220.dcm\"\n", 26 | "im = imageio.imread(\"chest-220.dcm\")\n", 27 | "\n", 28 | "# Print image attributes\n", 29 | "print('Image type:', type(im))\n", 30 | "print('Shape of image array:', im.shape)" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "Metadata\n", 38 | "\n", 39 | "ImageIO reads in data as Image objects. These are standard NumPy arrays with a dictionary of metadata.\n", 40 | "\n", 41 | "Metadata can be quite rich in medical images and can include:\n", 42 | "\n", 43 | " Patient demographics: name, age, sex, clinical information\n", 44 | " Acquisition information: image shape, sampling rates, data type, modality (such as X-Ray, CT or MRI)\n", 45 | "\n", 46 | "Start this exercise by reading in the chest image and listing the available fields in the meta dictionary.\n", 47 | "\n", 48 | "After reading in the image, use im.meta to select the true statement from the list below." 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "# Import ImageIO and load image\n", 58 | "import imageio\n", 59 | "im = imageio.imread(\"chest-220.dcm\")\n", 60 | "\n", 61 | "# Print the available metadata fields\n", 62 | "print(im.meta.keys())" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": {}, 68 | "source": [ 69 | "Plot images\n", 70 | "\n", 71 | "Perhaps the most critical principle of image analysis is: look at your images!\n", 72 | "\n", 73 | "Matplotlib's imshow() function gives you a simple way to do this. Knowing a few simple arguments will help:\n", 74 | "\n", 75 | " cmap controls the color mappings for each value. The \"gray\" colormap is common, but many others are available.\n", 76 | " vmin and vmax control the color contrast between values. Changing these can reduce the influence of extreme values.\n", 77 | " plt.axis('off') removes axis and tick labels from the image.\n", 78 | "\n", 79 | "For this exercise, plot the CT scan and investigate the effect of a few different parameters."
80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": null, 85 | "metadata": {}, 86 | "outputs": [], 87 | "source": [ 88 | "# Import ImageIO and PyPlot \n", 89 | "import imageio\n", 90 | "import matplotlib.pyplot as plt\n", 91 | "\n", 92 | "# Read in \"chest-220.dcm\"\n", 93 | "im = imageio.imread(\"chest-220.dcm\")\n", 94 | "# Draw the image in grayscale\n", 95 | "plt.imshow(im, cmap='gray')\n", 96 | "\n", 97 | "# Draw the image with greater contrast\n", 98 | "plt.imshow(im, cmap='gray', vmin=-200, vmax=200)\n", 99 | "\n", 100 | "# Remove axis ticks and labels\n", 101 | "plt.axis('off')\n", 102 | "\n", 103 | "# Render the image\n", 104 | "plt.show()" 105 | ] 106 | }, 107 | { 108 | "cell_type": "markdown", 109 | "metadata": {}, 110 | "source": [ 111 | "Stack images\n", 112 | "\n", 113 | "Image \"stacks\" are a useful metaphor for understanding multi-dimensional data. Each higher dimension is a stack of lower dimensional arrays.\n", 114 | "\n", 115 | "In this exercise, we will use NumPy's stack() function to combine several 2D arrays into a 3D volume. By convention, volumetric data should be stacked along the first dimension: vol[plane, row, col].\n", 116 | "\n", 117 | "Note: performing any operations on an ImageIO Image object will convert it to a numpy.ndarray, stripping its metadata." 118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": null, 123 | "metadata": {}, 124 | "outputs": [], 125 | "source": [ 126 | "# Import ImageIO and NumPy\n", 127 | "import imageio\n", 128 | "import numpy as np\n", 129 | "\n", 130 | "# Read in each 2D image\n", 131 | "im1 = imageio.imread('chest-220.dcm')\n", 132 | "im2 = imageio.imread('chest-221.dcm')\n", 133 | "im3 = imageio.imread('chest-222.dcm')\n", 134 | "\n", 135 | "# Stack images into a volume\n", 136 | "vol = np.stack([im1, im2, im3], axis=0)\n", 137 | "print('Volume dimensions:', vol.shape)" 138 | ] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "metadata": {}, 143 | "source": [ 144 | "Load volumes\n", 145 | "\n", 146 | "ImageIO's volread() function can load multi-dimensional datasets and create 3D volumes from a folder of images. It can also aggregate metadata across these multiple images.\n", 147 | "\n", 148 | "For this exercise, read in an entire volume of chest data from the \"tcia-chest-ct\" folder, which contains 25 DICOM images." 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": null, 154 | "metadata": {}, 155 | "outputs": [], 156 | "source": [ 157 | "# Import ImageIO\n", 158 | "import imageio\n", 159 | "\n", 160 | "# Load the \"tcia-chest-ct\" directory\n", 161 | "vol = imageio.volread(\"tcia-chest-ct\")\n", 162 | "\n", 163 | "# Print image attributes\n", 164 | "print('Available metadata:', vol.meta.keys())\n", 165 | "print('Shape of image array:', vol.shape)" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "Field of view\n", 173 | "\n", 174 | "The amount of physical space covered by an image is its field of view, which is calculated from two properties:\n", 175 | "\n", 176 | " Array shape, the number of data elements on each axis. Can be accessed with the shape attribute.\n", 177 | " Sampling resolution, the amount of physical space covered by each pixel. Sometimes available in metadata (e.g., meta['sampling']).\n", 178 | "\n", 179 | "For this exercise, multiply the array shape and sampling resolution along each axis to calculate the field of view of vol. All values are in millimeters."
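The field-of-view exercise above has no solution cell in this notebook. A minimal sketch, assuming vol is the volume loaded with imageio.volread() in the previous exercise and that its metadata carries a 'sampling' entry with per-axis spacing in millimeters:

```python
# Sketch only: assumes vol.meta['sampling'] exists, as this notebook uses later
import imageio

vol = imageio.volread("tcia-chest-ct")

# Array shape: the number of elements along each axis
n0, n1, n2 = vol.shape

# Sampling resolution: the physical space covered by each voxel (mm)
d0, d1, d2 = vol.meta['sampling']

# Field of view = shape * sampling resolution along each axis (mm)
print('Field of view (mm):', (n0 * d0, n1 * d1, n2 * d2))
```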
180 | ] 181 | }, 182 | { 183 | "cell_type": "markdown", 184 | "metadata": {}, 185 | "source": [ 186 | "Generate subplots\n", 187 | "\n", 188 | "You can draw multiple images in one figure to explore data quickly. Use plt.subplots() to generate an array of subplots.\n", 189 | "\n", 190 | "fig, axes = plt.subplots(nrows=2, ncols=2)\n", 191 | "\n", 192 | "To draw an image on a subplot, call the plotting method directly from the subplot object rather than through PyPlot: axes[0,0].imshow(im) rather than plt.imshow(im).\n", 193 | "\n", 194 | "For this exercise, draw im1 and im2 on separate subplots within the same figure." 195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": null, 200 | "metadata": {}, 201 | "outputs": [], 202 | "source": [ 203 | "# Import PyPlot\n", 204 | "import matplotlib.pyplot as plt\n", 205 | "\n", 206 | "# Initialize figure and axes grid\n", 207 | "fig, axes = plt.subplots(nrows=2, ncols=1)\n", 208 | "\n", 209 | "# Draw an image on each subplot\n", 210 | "axes[0].imshow(im1, cmap='gray')\n", 211 | "axes[1].imshow(im2, cmap='gray')\n", 212 | "\n", 213 | "# Remove ticks/labels and render\n", 214 | "axes[0].axis('off')\n", 215 | "axes[1].axis('off')\n", 216 | "\n", 217 | "plt.show()" 218 | ] 219 | }, 220 | { 221 | "cell_type": "markdown", 222 | "metadata": {}, 223 | "source": [ 224 | "Slice 3D images\n", 225 | "\n", 226 | "The simplest way to plot 3D and 4D images is to slice them into many 2D frames. Plotting many slices sequentially can create a \"fly-through\" effect that helps you understand the image as a whole.\n", 227 | "\n", 228 | "To select a 2D frame, pick a frame for the first axis and select all data from the remaining two: vol[0, :, :]\n", 229 | "\n", 230 | "For this exercise, use a for loop to plot every 40th slice of vol on a separate subplot. matplotlib.pyplot (as plt) has been imported for you." 231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": null, 236 | "metadata": {}, 237 | "outputs": [], 238 | "source": [ 239 | "# Plot the images on a subplots array \n", 240 | "fig, axes = plt.subplots(nrows=1, ncols=4)\n", 241 | "\n", 242 | "# Loop through subplots and draw image\n", 243 | "for ii in range(4):\n", 244 | "    im = vol[ii * 40, :, :]\n", 245 | "    axes[ii].imshow(im, cmap='gray')\n", 246 | "    axes[ii].axis('off')\n", 247 | "\n", 248 | "# Render the figure\n", 249 | "plt.show()" 250 | ] 251 | }, 252 | { 253 | "cell_type": "markdown", 254 | "metadata": {}, 255 | "source": [ 256 | "Plot other views\n", 257 | "\n", 258 | "Any two dimensions of an array can form an image, and slicing along different axes can provide a useful perspective. However, unequal sampling rates can create distorted images.\n", 259 | "\n", 260 | "Changing the aspect ratio can address this by increasing the width of one of the dimensions.\n", 261 | "\n", 262 | "For this exercise, plot images that slice along the second and third dimensions of vol. Explicitly set the aspect ratio to generate undistorted images."
263 | ] 264 | }, 265 | { 266 | "cell_type": "code", 267 | "execution_count": null, 268 | "metadata": {}, 269 | "outputs": [], 270 | "source": [ 271 | "# Select frame from \"vol\"\n", 272 | "im1 = vol[:, 256, :]\n", 273 | "im2 = vol[:, :, 256]\n", 274 | "\n", 275 | "# Compute aspect ratios\n", 276 | "d0, d1, d2 = vol.meta['sampling']\n", 277 | "asp1 = d0 / d2\n", 278 | "asp2 = d0 / d1\n", 279 | "\n", 280 | "# Plot the images on a subplots array \n", 281 | "fig, axes = plt.subplots(nrows=2, ncols=1)\n", 282 | "axes[0].imshow(im1, cmap='gray', aspect=asp1)\n", 283 | "axes[1].imshow(im2, cmap='gray', aspect=asp2)\n", 284 | "\n", 285 | "plt.show()" 286 | ] 287 | } 288 | ], 289 | "metadata": { 290 | "language_info": { 291 | "name": "python" 292 | } 293 | }, 294 | "nbformat": 4, 295 | "nbformat_minor": 2 296 | } 297 | -------------------------------------------------------------------------------- /Code/ch2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Intensity\n", 8 | "\n", 9 | "In this chapter, we will work with a hand radiograph from a 2017 Radiological Society of North America competition. X-ray absorption is highest in dense tissue such as bone, so the resulting intensities should be high. Consequently, images like this can be used to predict \"bone age\" in children.\n", 10 | "\n", 11 | "To start, let's load the image and check its intensity range.\n", 12 | "\n", 13 | "The image datatype determines the range of possible intensities: e.g., 8-bit unsigned integers (uint8) can take values in the range of 0 to 255. A colorbar can be helpful for connecting these values to the visualized image.\n", 14 | "\n", 15 | "All exercises in this chapter have the following imports:\n", 16 | "\n", 17 | "import imageio\n", 18 | "import numpy as np\n", 19 | "import matplotlib.pyplot as plt\n" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "# Load the hand radiograph\n", 29 | "im = plt.imread(\"hand-xray.jpg\")\n", 30 | "print('Data type:', im.dtype)\n", 31 | "print('Min. value:', im.min())\n", 32 | "print('Max value:', im.max())\n", 33 | "\n", 34 | "# Plot the grayscale image\n", 35 | "plt.imshow(im, cmap='gray', vmin=0, vmax=255)\n", 36 | "plt.colorbar()\n", 37 | "format_and_render_plot()" 38 | ] 39 | }, 40 | { 41 | "cell_type": "markdown", 42 | "metadata": {}, 43 | "source": [ 44 | "Histograms\n", 45 | "\n", 46 | "Histograms display the distribution of values in your image by binning each element by its intensity then measuring the size of each bin.\n", 47 | "\n", 48 | "The cumulative sum of a histogram, normalized to one, is the cumulative distribution function (CDF). It measures the frequency with which a given range of pixel intensities occurs.\n", 49 | "\n", 50 | "For this exercise, describe the intensity distribution in im by calculating the histogram and cumulative distribution function and displaying them together."
51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": null, 56 | "metadata": {}, 57 | "outputs": [], 58 | "source": [ 59 | "# Import SciPy's \"ndimage\" module\n", 60 | "import scipy.ndimage as ndi\n", 61 | "\n", 62 | "# Create a histogram, binned at each possible value\n", 63 | "hist = ndi.histogram(im, min=0, max=255, bins=256)\n", 64 | "\n", 65 | "# Create a cumulative distribution function\n", 66 | "cdf = hist.cumsum() / hist.sum()\n", 67 | "\n", 68 | "# Plot the histogram and CDF\n", 69 | "fig, axes = plt.subplots(2, 1, sharex=True)\n", 70 | "axes[0].plot(hist, label='Histogram')\n", 71 | "axes[1].plot(cdf, label='CDF')\n", 72 | "format_and_render_plot()" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "Create a mask\n", 80 | "\n", 81 | "Masks are the primary method for removing or selecting specific parts of an image. They are binary arrays that indicate whether a value should be included in an analysis. Typically, masks are created by applying one or more logical operations to an image.\n", 82 | "\n", 83 | "For this exercise, try to use a simple intensity threshold to differentiate between skin and bone in the hand radiograph. (im has been equalized to utilize the whole intensity range.)\n", 84 | "\n", 85 | "Below is the histogram of im colored by the segments we will plot." 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "# Create skin and bone masks\n", 95 | "mask_bone = im >= 145\n", 96 | "mask_skin = (im >= 45) & (im < 145)\n", 97 | "\n", 98 | "# Plot the skin (0) and bone (1) masks\n", 99 | "fig, axes = plt.subplots(1,2)\n", 100 | "axes[0].imshow(mask_skin, cmap='gray')\n", 101 | "axes[1].imshow(mask_bone, cmap='gray')\n", 102 | "format_and_render_plot()" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": {}, 108 | "source": [ 109 | "Apply a mask\n", 110 | "\n", 111 | "Although masks are binary, they can be applied to images to filter out pixels where the mask is False.\n", 112 | "\n", 113 | "NumPy's where() function is a flexible way of applying masks. It takes three arguments:\n", 114 | "\n", 115 | "np.where(condition, x, y)\n", 116 | "\n", 117 | "condition, x and y can be either arrays or single values. This allows you to pass through original image values while setting masked values to 0.\n", 118 | "\n", 119 | "Let's practice applying masks by selecting the bone-like pixels from the hand x-ray (im)." 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": null, 125 | "metadata": {}, 126 | "outputs": [], 127 | "source": [ 128 | "# Import SciPy's \"ndimage\" module\n", 129 | "from scipy import ndimage as ndi\n", 130 | "\n", 131 | "# Screen out non-bone pixels from \"im\"\n", 132 | "mask_bone = im >= 145\n", 133 | "im_bone = np.where(mask_bone, im , 0)\n", 134 | "\n", 135 | "# Get the histogram of bone intensities\n", 136 | "hist = ndi.histogram(im_bone,min=1, max=255, bins=255 )\n", 137 | "\n", 138 | "# Plot masked image and histogram\n", 139 | "fig, axes = plt.subplots(2,1)\n", 140 | "axes[0].imshow(im_bone)\n", 141 | "axes[1].plot(hist)\n", 142 | "format_and_render_plot()" 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "metadata": {}, 148 | "source": [ 149 | "Tune a mask\n", 150 | "\n", 151 | "Imperfect masks can be tuned through the addition and subtraction of pixels. SciPy includes several useful methods for accomplishing these ends. 
These include:\n", 152 | "\n", 153 | " binary_dilation: Add pixels along edges\n", 154 | " binary_erosion: Remove pixels along edges\n", 155 | " binary_opening: Erode then dilate, \"opening\" areas near edges\n", 156 | " binary_closing: Dilate then erode, \"filling in\" holes\n", 157 | "\n", 158 | "For this exercise, create a bone mask then tune it to include additional pixels.\n", 159 | "\n", 160 | "For the remaining exercises, we have run the following import for you:\n", 161 | "\n", 162 | "import scipy.ndimage as ndi\n" 163 | ] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "execution_count": null, 168 | "metadata": {}, 169 | "outputs": [], 170 | "source": [ 171 | "# Create and tune bone mask\n", 172 | "mask_bone = im >= 145\n", 173 | "mask_dilate = ndi.binary_dilation(mask_bone, iterations=5)\n", 174 | "mask_closed = ndi.binary_closing(mask_bone,iterations=5)\n", 175 | "\n", 176 | "# Plot masked images\n", 177 | "fig, axes = plt.subplots(1,3)\n", 178 | "axes[0].imshow(mask_bone)\n", 179 | "axes[1].imshow(mask_dilate)\n", 180 | "axes[2].imshow(mask_closed)\n", 181 | "format_and_render_plot()" 182 | ] 183 | }, 184 | { 185 | "cell_type": "markdown", 186 | "metadata": {}, 187 | "source": [ 188 | "Filter convolutions\n", 189 | "\n", 190 | "Filters are an essential tool in image processing. They allow you to transform images based on intensity values surrounding a pixel, rather than globally.\n", 191 | "\n", 192 | "2D array convolution. By Michael Plotke [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons\n", 193 | "\n", 194 | "For this exercise, smooth the foot radiograph. First, specify the weights to be used. (These are called \"footprints\" and \"kernels\" as well.) Then, convolve the filter with im and plot the result." 195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": null, 200 | "metadata": {}, 201 | "outputs": [], 202 | "source": [ 203 | "# Set filter weights\n", 204 | "weights = [[0.11, 0.11, 0.11],\n", 205 | " [0.11, 0.11, 0.11], \n", 206 | " [0.11, 0.11, 0.11]]\n", 207 | "\n", 208 | "# Convolve the image with the filter\n", 209 | "im_filt = ndi.convolve(im, weights)\n", 210 | "\n", 211 | "# Plot the images\n", 212 | "fig, axes = plt.subplots(1,2)\n", 213 | "axes[0].imshow(im)\n", 214 | "axes[1].imshow(im_filt)\n", 215 | "\n", 216 | "format_and_render_plot()" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "Filter functions\n", 224 | "\n", 225 | "Convolutions rely on a set of weights, but filtering can also be done using functions such as the mean, median and maximum. Just like with convolutions, filter functions will update each pixel value based on its local neighborhood.\n", 226 | "\n", 227 | "Consider the following lines of code:\n", 228 | "\n", 229 | "im = np.array([[93, 36, 87], \n", 230 | " [18, 49, 51],\n", 231 | " [45, 32, 63]])\n", 232 | "\n", 233 | "im_filt = ____\n", 234 | "\n", 235 | "assert im_filt[1,1] == 49\n", 236 | "\n", 237 | "Which of the following statements should go in the blank so that the assert statement evaluates to True?" 238 | ] 239 | }, 240 | { 241 | "cell_type": "markdown", 242 | "metadata": {}, 243 | "source": [ 244 | "Smoothing\n", 245 | "\n", 246 | "Smoothing can improve the signal-to-noise ratio of your image by blurring out small variations in intensity. 
The Gaussian filter is excellent for this: it is a circular (or spherical) smoothing kernel that weights nearby pixels higher than distant ones.\n", 247 | "\n", 248 | "The width of the distribution is controlled by the sigma argument, with higher values leading to larger smoothing effects.\n", 249 | "\n", 250 | "For this exercise, test the effects of applying Gaussian filters to the foot x-ray before creating a bone mask." 251 | ] 252 | }, 253 | { 254 | "cell_type": "code", 255 | "execution_count": null, 256 | "metadata": {}, 257 | "outputs": [], 258 | "source": [ 259 | "# Smooth \"im\" with Gaussian filters\n", 260 | "im_s1 = ndi.gaussian_filter(im, sigma=1)\n", 261 | "im_s3 = ndi.gaussian_filter(im, sigma=3)\n", 262 | "\n", 263 | "# Draw bone masks of each image\n", 264 | "fig, axes = plt.subplots(1, 3)\n", 265 | "axes[0].imshow(im >= 145)\n", 266 | "axes[1].imshow(im_s1 >= 145)\n", 267 | "axes[2].imshow(im_s3 >= 145)\n", 268 | "format_and_render_plot()" 269 | ] 270 | }, 271 | { 272 | "cell_type": "markdown", 273 | "metadata": {}, 274 | "source": [ 275 | "Detect edges (1)\n", 276 | "\n", 277 | "Filters can also be used as \"detectors.\" If a part of the image fits the weighting pattern, the returned value will be very high (or very low).\n", 278 | "\n", 279 | "In the case of edge detection, that pattern is a change in intensity along a plane. A filter detecting horizontal edges might look like this:\n", 280 | "\n", 281 | "weights = [[+1, +1, +1],\n", 282 | "           [ 0,  0,  0],\n", 283 | "           [-1, -1, -1]]\n", 284 | "\n", 285 | "For this exercise, create a vertical edge detector and see how well it performs on the hand x-ray (im)." 286 | ] 287 | }, 288 | { 289 | "cell_type": "code", 290 | "execution_count": null, 291 | "metadata": {}, 292 | "outputs": [], 293 | "source": [ 294 | "# Set weights to detect vertical edges\n", 295 | "weights = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]\n", 296 | "\n", 297 | "# Convolve \"im\" with filter weights\n", 298 | "edges = ndi.convolve(im, weights)\n", 299 | "\n", 300 | "# Draw the image in color\n", 301 | "plt.imshow(edges, cmap='seismic', vmin=-150, vmax=150)\n", 302 | "plt.colorbar()\n", 303 | "format_and_render_plot()" 304 | ] 305 | }, 306 | { 307 | "cell_type": "markdown", 308 | "metadata": {}, 309 | "source": [ 310 | "Detect edges (2)\n", 311 | "\n", 312 | "Edge detection can be performed along multiple axes, then combined into a single edge value. For 2D images, the horizontal and vertical \"edge maps\" can be combined using the Pythagorean theorem: edges = sqrt(edges_ax0^2 + edges_ax1^2)\n", 313 | "\n", 314 | "One popular edge detector is the Sobel filter. 
The Sobel filter provides extra weight to the center pixels of the detector:\n", 315 | "\n", 316 | "weights = [[ 1, 2, 1], \n", 317 | " [ 0, 0, 0],\n", 318 | " [-1, -2, -1]]\n", 319 | "\n", 320 | "For this exercise, improve upon your previous detection effort by merging the results of two Sobel-filtered images into a composite edge map" 321 | ] 322 | }, 323 | { 324 | "cell_type": "code", 325 | "execution_count": null, 326 | "metadata": {}, 327 | "outputs": [], 328 | "source": [ 329 | "# Apply Sobel filter along both axes\n", 330 | "sobel_ax0 = ndi.sobel(im, axis=0)\n", 331 | "sobel_ax1 = ndi.sobel(im, axis=1)\n", 332 | "\n", 333 | "# Calculate edge magnitude \n", 334 | "edges = np.sqrt(np.square(sobel_ax0)+np.square(sobel_ax1))\n", 335 | "\n", 336 | "# Plot edge magnitude\n", 337 | "plt.imshow(edges, cmap='gray', vmax=75)\n", 338 | "format_and_render_plot()" 339 | ] 340 | } 341 | ], 342 | "metadata": { 343 | "language_info": { 344 | "name": "python" 345 | } 346 | }, 347 | "nbformat": 4, 348 | "nbformat_minor": 2 349 | } 350 | -------------------------------------------------------------------------------- /Code/ch3.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Segment the heart\n", 8 | "\n", 9 | "In this chapter, we'll work with magnetic resonance (MR) imaging data from the Sunnybrook Cardiac Dataset. The full image is a 3D time series spanning a single heartbeat. These data are used by radiologists to measure the ejection fraction: the proportion of blood ejected from the left ventricle during each stroke.\n", 10 | "\n", 11 | "To begin, segment the left ventricle from a single slice of the volume (im). First, you'll filter and mask the image; then you'll label each object with ndi.label().\n", 12 | "\n", 13 | "This chapter's exercises have the following imports:\n", 14 | "\n", 15 | "import imageio\n", 16 | "import numpy as np\n", 17 | "import scipy.ndimage as ndi\n", 18 | "import matplotlib.pyplot as plt\n" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "# Smooth intensity values\n", 28 | "im_filt = ndi.median_filter(im, size=3)\n", 29 | "\n", 30 | "# Select high-intensity pixels\n", 31 | "mask_start = np.where(im_filt > 60, 1, 0)\n", 32 | "mask = ndi.binary_closing(mask_start)\n", 33 | "\n", 34 | "# Label the objects in \"mask\"\n", 35 | "labels, nlabels = ndi.label(mask)\n", 36 | "print('Num. Labels:', nlabels)" 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": {}, 42 | "source": [ 43 | "Select objects\n", 44 | "\n", 45 | "Labels are like object \"handles\" - they give you a way to pick up whole sets of pixels at a time. To select a particular object:\n", 46 | "\n", 47 | " Find the label value associated with the object.\n", 48 | " Create a mask of matching pixels.\n", 49 | "\n", 50 | "For this exercise, create a labeled array from the provided mask. Then, find the label value for the centrally-located left ventricle, and create a mask for it." 
51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": null, 56 | "metadata": {}, 57 | "outputs": [], 58 | "source": [ 59 | "# Label the image \"mask\"\n", 60 | "labels, nlabels = ndi.label(mask)\n", 61 | "\n", 62 | "# Select left ventricle pixels\n", 63 | "lv_val = labels[128, 128]\n", 64 | "lv_mask = np.where(labels == lv_val, 1, np.nan)\n", 65 | "\n", 66 | "# Overlay selected label\n", 67 | "plt.imshow(lv_mask, cmap='rainbow')\n", 68 | "plt.show()" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "Extract objects\n", 76 | "\n", 77 | "Extracting objects from the original image eliminates unrelated pixels and provides new images that can be analyzed independently.\n", 78 | "\n", 79 | "The key is to crop images so that they only include the object of interest. The range of pixel indices that encompass the object is the bounding box.\n", 80 | "\n", 81 | "For this exercise, use ndi.find_objects() to create a new image containing only the left ventricle." 82 | ] 83 | }, 84 | { 85 | "cell_type": "code", 86 | "execution_count": null, 87 | "metadata": {}, 88 | "outputs": [], 89 | "source": [ 90 | "# Create left ventricle mask\n", 91 | "labels, nlabels = ndi.label(mask)\n", 92 | "\n", 93 | "lv_val = labels[128, 128]\n", 94 | "lv_mask = np.where(labels == lv_val, 1, 0)\n\n# Find the object's bounding box, then crop the image to it\nbboxes = ndi.find_objects(lv_mask)\nprint('Indices for first box:', bboxes[0])\nim_lv = im[bboxes[0]]" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "Measure variance\n", 102 | "\n", 103 | "SciPy measurement functions allow you to tailor measurements to specific sets of pixels:\n", 104 | "\n", 105 | " Specifying labels restricts the mask to non-zero pixels.\n", 106 | " Specifying index value(s) returns a measure for each label value.\n", 107 | "\n", 108 | "For this exercise, calculate the intensity variance of vol with respect to different pixel sets. We have provided the 3D segmented image as labels: label 1 is the left ventricle and label 2 is a circular sample of tissue.\n", 109 | "\n", 110 | "Labeled Volume\n", 111 | "\n", 112 | "After printing the variances, select the true statement from the answers below." 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": null, 118 | "metadata": {}, 119 | "outputs": [], 120 | "source": [ 121 | "# Variance for all pixels\n", 122 | "var_all = ndi.variance(vol, labels=None, index=None)\n", 123 | "print('All pixels:', var_all)\n", 124 | "\n", 125 | "# Variance for labeled pixels\n", 126 | "var_labels = ndi.variance(vol, labels=labels, index=None)\n", 127 | "print('Labeled pixels:', var_labels)\n", 128 | "\n", 129 | "# Variance for each object\n", 130 | "var_objects = ndi.variance(vol, labels=labels, index=[1, 2])\n", 131 | "print('Left ventricle:', var_objects[0])\n", 132 | "print('Other tissue:', var_objects[1])" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "Separate histograms\n", 140 | "\n", 141 | "A poor tissue segmentation includes multiple tissue types, leading to a wide distribution of intensity values and more variance.\n", 142 | "\n", 143 | "On the other hand, a perfectly segmented left ventricle would contain only blood-related pixels, so the histogram of the segmented values should be roughly bell-shaped.\n", 144 | "\n", 145 | "For this exercise, compare the intensity distributions within vol for the listed sets of pixels. Use ndi.histogram, which also accepts labels and index arguments."
146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [ 154 | "# Create histograms for selected pixels\n", 155 | "hist1 = ndi.histogram(vol, min=0, max=255, bins=256)\n", 156 | "hist2 = ndi.histogram(vol, 0, 255, 256, labels=labels)\n", 157 | "hist3 = ndi.histogram(vol, 0, 255, 256, labels=labels, index=1)\n", 158 | "\n", 159 | "# Plot the histogram density\n", 160 | "plt.plot(hist1 / hist1.sum(), label='All pixels')\n", 161 | "plt.plot(hist2 / hist2.sum(), label='All labeled pixels')\n", 162 | "plt.plot(hist3 / hist3.sum(), label='Left ventricle')\n", 163 | "format_and_render_plot()" 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 170 | "Calculate volume\n", 171 | "\n", 172 | "Quantifying tissue morphology, or shape, is one primary objective of biomedical imaging. The size, shape, and uniformity of a tissue can reveal essential health insights.\n", 173 | "\n", 174 | "For this exercise, measure the volume of the left ventricle in one 3D image (vol).\n", 175 | "\n", 176 | "First, count the number of voxels in the left ventricle (label value of 1). Then, multiply it by the size of each voxel in mm^3.\n", 177 | "(Check vol.meta for the sampling rate.)" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "Calculate distance\n", 185 | "\n", 186 | "A distance transformation calculates the distance from each pixel to a given point, usually the nearest background pixel. This allows you to determine which points in the object are more interior and which are closer to edges.\n", 187 | "\n", 188 | "For this exercise, use the Euclidean distance transform on the left ventricle object in labels." 189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "metadata": {}, 195 | "outputs": [], 196 | "source": [ 197 | "# Calculate left ventricle distances\n", 198 | "lv = np.where(labels == 1, 1, 0)\n", 199 | "dists = ndi.distance_transform_edt(lv, sampling=vol.meta['sampling'])\n", 200 | "\n", 201 | "# Report on distances\n", 202 | "print('Max distance (mm):', ndi.maximum(dists))\n", 203 | "print('Max location:', ndi.maximum_position(dists))\n", 204 | "\n", 205 | "# Plot overlay of distances\n", 206 | "overlay = np.where(dists[5] > 0, dists[5], np.nan) \n", 207 | "plt.imshow(overlay, cmap='hot')\n", 208 | "format_and_render_plot()" 209 | ] 210 | }, 211 | { 212 | "cell_type": "markdown", 213 | "metadata": {}, 214 | "source": [ 215 | "Pinpoint center of mass\n", 216 | "\n", 217 | "The distance transformation reveals the most embedded portions of an object. On the other hand, ndi.center_of_mass() returns the coordinates for the center of an object.\n", 218 | "\n", 219 | "The \"mass\" corresponds to intensity values, with higher values pulling the center closer to it.\n", 220 | "\n", 221 | "For this exercise, calculate the center of mass for the two labeled areas. Then, plot them on top of the image."
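Before the center-of-mass solution below: the "Calculate volume" exercise above has no solution cell. A sketch under its stated assumptions (label value 1 marks the left ventricle, and vol.meta['sampling'] gives per-axis voxel spacing in mm):

```python
import scipy.ndimage as ndi

# Count the voxels labeled 1 (left ventricle); summing the constant 1 over
# the label-1 region is equivalent to (labels == 1).sum()
nvoxels = ndi.sum(1, labels, index=1)

# Physical volume of a single voxel: the product of per-axis spacing (mm^3)
d0, d1, d2 = vol.meta['sampling']
dvoxel = d0 * d1 * d2

# Total left ventricle volume in mm^3
print('Left ventricle volume (mm^3):', nvoxels * dvoxel)
```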
222 | ] 223 | }, 224 | { 225 | "cell_type": "code", 226 | "execution_count": null, 227 | "metadata": {}, 228 | "outputs": [], 229 | "source": [ 230 | "# Extract centers of mass for objects 1 and 2\n", 231 | "coms = ndi.center_of_mass(vol, labels, index=[1, 2])\n", 232 | "print('Label 1 center:', coms[0])\n", 233 | "print('Label 2 center:', coms[1])\n", 234 | "\n", 235 | "# Add marks to plot: scatter takes (col, row) coordinates\n", 236 | "for c0, c1, c2 in coms:\n", 237 | "    plt.scatter(c2, c1, s=100, marker='o')\n", 238 | "plt.show()" 239 | ] 240 | }, 241 | { 242 | "cell_type": "markdown", 243 | "metadata": {}, 244 | "source": [ 245 | "Summarize the time series\n", 246 | "\n", 247 | "The ejection fraction is the proportion of blood squeezed out of the left ventricle each heartbeat. To calculate it, radiologists have to identify the maximum volume (diastolic volume) and the minimum volume (systolic volume) of the ventricle.\n", 248 | "\n", 249 | "Slice 4 of Cardiac Timeseries\n", 250 | "\n", 251 | "For this exercise, create a time series of volume calculations. There are 20 time points in both vol_ts and labels. The data is ordered by (time, plane, row, col)." 252 | ] 253 | }, 254 | { 255 | "cell_type": "code", 256 | "execution_count": null, 257 | "metadata": {}, 258 | "outputs": [], 259 | "source": [ 260 | "# Create an empty time series\n", 261 | "ts = np.zeros(20)\n", 262 | "\n", 263 | "# Calculate volume at each voxel\n", 264 | "d0, d1, d2, d3 = vol_ts.meta['sampling']\n", 265 | "dvoxel = d1 * d2 * d3\n", 266 | "\n", 267 | "# Loop over the labeled arrays\n", 268 | "for t in range(20):\n", 269 | "    nvoxels = ndi.sum(1, labels[t], index=1)\n", 270 | "    ts[t] = nvoxels * dvoxel\n", 271 | "\n", 272 | "# Plot the data\n", 273 | "plt.plot(ts)\n", 274 | "format_and_render_plot()" 275 | ] 276 | }, 277 | { 278 | "cell_type": "markdown", 279 | "metadata": {}, 280 | "source": [ 281 | "Measure ejection fraction\n", 282 | "\n", 283 | "The ejection fraction is defined as:\n", 284 | "\n", 285 | "EF = (V_max - V_min) / V_max\n", 286 | "\n", 287 | "where V_t is the left ventricle volume for one 3D timepoint.\n", 288 | "\n", 289 | "To close our investigation, plot slices from the maximum and minimum volumes by analyzing the volume time series (ts). Then, calculate the ejection fraction.\n", 290 | "\n", 291 | "After calculating the ejection fraction, review the chart below. Should this patient be concerned?" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": null, 297 | "metadata": {}, 298 | "outputs": [], 299 | "source": [ 300 | "# Get index of max and min volumes\n", 301 | "tmax = np.argmax(ts)\n", 302 | "tmin = np.argmin(ts)\n", 303 | "\n", 304 | "# Plot the largest and smallest volumes\n", 305 | "fig, axes = plt.subplots(2, 1)\n", 306 | "axes[0].imshow(vol_ts[tmax, 4], vmax=160)\n", 307 | "axes[1].imshow(vol_ts[tmin, 4], vmax=160)\n", 308 | "format_and_render_plot()\n", 309 | "\n", 310 | "# Calculate ejection fraction\n", 311 | "ej_vol = ts.max() - ts.min()\n", 312 | "ej_frac = ej_vol / ts.max()\n", 313 | "print('Est. ejection volume (mm^3):', ej_vol)\n", 314 | "print('Est. 
ejection fraction:', ej_frac)" 315 | ] 316 | } 317 | ], 318 | "metadata": { 319 | "language_info": { 320 | "name": "python" 321 | } 322 | }, 323 | "nbformat": 4, 324 | "nbformat_minor": 2 325 | } 326 | -------------------------------------------------------------------------------- /Code/ch4.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Translations\n", 8 | "\n", 9 | "In this chapter, we'll leverage data use data from the Open Access Series of Imaging Studies to compare the brains of different populations: young and old, male and female, healthy and diseased.\n", 10 | "\n", 11 | "To start, center a single slice of a 3D brain volume (im). First, find the center point in the image array and the center of mass of the brain. Then, translate the image to the center.\n", 12 | "\n", 13 | "This chapter's exercises have all had the following imports:\n", 14 | "\n", 15 | "import imageio\n", 16 | "import numpy as np\n", 17 | "import scipy.ndimage as ndi\n", 18 | "import matplotlib.pyplot as plt\n" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "# Find image center of mass\n", 28 | "com = ndi.center_of_mass(im)\n", 29 | "\n", 30 | "# Calculate amount of shift needed\n", 31 | "d0 = 128 - com[0]\n", 32 | "d1 = 128 - com[1]\n", 33 | "\n", 34 | "# Translate the brain towards the center\n", 35 | "xfm = ndi.shift(im, shift=(d0, d1))\n", 36 | "\n", 37 | "# Plot the original and adjusted images\n", 38 | "fig, axes = plt.subplots(2, 1)\n", 39 | "axes[0].imshow(im)\n", 40 | "axes[1].imshow(xfm)\n", 41 | "format_and_render_plot()" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "metadata": {}, 47 | "source": [ 48 | "Rotations\n", 49 | "\n", 50 | "In cases where an object is angled or flipped, the image can be rotated. Using ndi.rotate(), the image is rotated from its center by the specified degrees from the right horizontal axis.\n", 51 | "\n", 52 | "For this exercise, shift and rotate the brain image (im) so that it is roughly level and \"facing\" the right side of the image." 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "# Shift the image towards the center\n", 62 | "xfm = ____\n", 63 | "\n", 64 | "# Rotate the shifted image\n", 65 | "xfm = ndi.rotate(____, angle=____, reshape=____)\n", 66 | "\n", 67 | "# Plot the original and rotated images\n", 68 | "fig, axes = plt.subplots(2, 1)\n", 69 | "axes[0].imshow(im)\n", 70 | "____\n", 71 | "format_and_render_plot()" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "Affine transform\n", 79 | "\n", 80 | "An affine transformation matrix provides directions for up to four types of changes: translating, rotating, rescaling and shearing. The elements of the matrix map the coordinates from the input array to the output.\n", 81 | "\n", 82 | "Encoded transformations within a matrix\n", 83 | "\n", 84 | "For this exercise, use ndi.affine_transform() to apply the following registration matrices to im. Which one does the best job of centering, leveling and enlarging the original image?" 85 | ] 86 | }, 87 | { 88 | "cell_type": "markdown", 89 | "metadata": {}, 90 | "source": [ 91 | "Resampling\n", 92 | "\n", 93 | "Images can be collected in a variety of shapes and sizes. 
Resampling is a useful tool when these shapes need to be made consistent. Two common applications are:\n", 94 | "\n", 95 | " Downsampling: combining pixel data to decrease size\n", 96 | " Upsampling: distributing pixel data to increase size\n", 97 | "\n", 98 | "For this exercise, transform and then resample the brain image (im) to see how it affects image shape." 99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": null, 104 | "metadata": {}, 105 | "outputs": [], 106 | "source": [ 107 | "# Center and level image\n", 108 | "xfm = ndi.shift(im, shift=[-20, -20])\n", 109 | "xfm = ndi.rotate(xfm, angle=-35, reshape=False)\n", 110 | "\n", 111 | "# Resample image\n", 112 | "im_dn = ndi.zoom(xfm, zoom=0.25)\n", 113 | "im_up = ndi.zoom(xfm, zoom=4)\n", 114 | "\n", 115 | "# Plot the images\n", 116 | "fig, axes = plt.subplots(2, 1)\n", 117 | "axes[0].imshow(im_dn)\n", 118 | "axes[1].imshow(im_up)\n", 119 | "\n", 120 | "format_and_render_plot()" 121 | ] 122 | }, 123 | { 124 | "cell_type": "markdown", 125 | "metadata": {}, 126 | "source": [ 127 | "Interpolation\n", 128 | "\n", 129 | "Interpolation is how new pixel intensities are estimated when an image transformation is applied. It is implemented in SciPy using sets of spline functions.\n", 130 | "\n", 131 | "Editing the interpolation order when using a function such as ndi.zoom() modifies the resulting estimate: higher orders provide more flexible estimates but take longer to compute.\n", 132 | "\n", 133 | "For this exercise, upsample im and investigate the effect of different interpolation orders on the resulting image." 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": null, 139 | "metadata": {}, 140 | "outputs": [], 141 | "source": [ 142 | "# Upsample \"im\" by a factor of 4\n", 143 | "up0 = ndi.zoom(im, zoom=4, order=0)\n", 144 | "up5 = ndi.zoom(im, zoom=4, order=5)\n", 145 | "\n", 146 | "# Print original and new shape\n", 147 | "print('Original shape:', im.shape)\n", 148 | "print('Upsampled shape:', up0.shape)\n", 149 | "\n", 150 | "# Plot close-ups of the new images\n", 151 | "fig, axes = plt.subplots(1, 2)\n", 152 | "axes[0].imshow(up0[128:256, 128:256])\n", 153 | "axes[1].imshow(up5[128:256, 128:256])\n", 154 | "format_and_render_plot()" 155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "metadata": {}, 160 | "source": [ 161 | "Mean absolute error\n", 162 | "\n", 163 | "Cost functions and objective functions output a single value that summarizes how well two images match.\n", 164 | "\n", 165 | "The mean absolute error (MAE), for example, summarizes intensity differences between two images, with higher values indicating greater divergence.\n", 166 | "\n", 167 | "For this exercise, calculate the mean absolute error between im1 and im2 step-by-step." 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": null, 173 | "metadata": {}, 174 | "outputs": [], 175 | "source": [ 176 | "# Calculate the signed image difference\n", 177 | "err = im1 - im2\n", 178 | "\n", 179 | "# Plot the difference (the diverging colormap centers zero)\n", 180 | "plt.imshow(err, cmap='seismic', vmin=-200, vmax=200)\n", 181 | "format_and_render_plot()\n", 182 | "\n", 183 | "# Calculate mean absolute error\n", 184 | "mean_abs_err = np.mean(np.abs(im1 - im2))\n", 185 | "print('MAE:', mean_abs_err)" 186 | ] 187 | }, 188 | { 189 | "cell_type": "markdown", 190 | "metadata": {}, 191 | "source": [ 192 | "Intersection over union\n", 193 | "\n", 194 | "Another cost function is the intersection over union (IOU). 
The IOU is the number of pixels filled in both images (the intersection) out of the number of pixels filled in either image (the union).\n", 195 | "\n", 196 | "For this exercise, determine how best to transform im1 to maximize the IOU cost function with im2. We have defined the following function for you:\n", 197 | "\n", 198 | "def intersection_of_union(im1, im2):\n", 199 | " i = np.logical_and(im1, im2)\n", 200 | " u = np.logical_or(im1, im2)\n", 201 | " return i.sum() / u.sum()\n", 202 | "\n", 203 | "Note: When using ndi.rotate(), remember to pass reshape=False, so that array shapes match." 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "metadata": {}, 209 | "source": [ 210 | "Identifying potential confounds\n", 211 | "\n", 212 | "Once measures have been extracted, double-check for dependencies within your data. This is especially true if any image parameters (sampling rate, field of view) might differ between subjects, or you pull multiple measures from a single image.\n", 213 | "\n", 214 | "For the final exercises, we have combined demographic and brain volume measures into a pandas DataFrame (df).\n", 215 | "\n", 216 | "First, you will explore the table and available variables. Then, you will check for correlations between the data." 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": null, 222 | "metadata": {}, 223 | "outputs": [], 224 | "source": [ 225 | "# Print random sample of rows\n", 226 | "print(df.sample(3))\n", 227 | "\n", 228 | "# Print prevalence of Alzheimer's Disease\n", 229 | "print(df.alzheimers.value_counts())\n", 230 | "\n", 231 | "# Print a correlation table\n", 232 | "print(df.corr())" 233 | ] 234 | }, 235 | { 236 | "cell_type": "markdown", 237 | "metadata": {}, 238 | "source": [ 239 | "Testing group differences\n", 240 | "\n", 241 | "Let's test the hypothesis that Alzheimer's Disease is characterized by reduced brain volume.\n", 242 | "\n", 243 | "Sample Segmentations of Alzheimer's and Typical Subject\n", 244 | "\n", 245 | "We can perform a two-sample t-test between the brain volumes of elderly adults with and without Alzheimer's Disease. In this case, the two population samples are independent from each other because they are all separate subjects.\n", 246 | "\n", 247 | "For this exercise, use the OASIS dataset (df) and ttest_ind to evaluate the hypothesis." 
248 | ] 249 | }, 250 | { 251 | "cell_type": "code", 252 | "execution_count": null, 253 | "metadata": {}, 254 | "outputs": [], 255 | "source": [ 256 | "# Import independent two-sample t-test\n", 257 | "from scipy.stats import ttest_ind\n", 258 | "\n", 259 | "# Select data from \"alzheimers\" and \"typical\" groups\n", 260 | "brain_alz = df.loc[df.alzheimers == True, 'brain_vol']\n", 261 | "brain_typ = df.loc[df.alzheimers == False, 'brain_vol']\n", 262 | "\n", 263 | "# Perform an independent two-sample t-test\n", 264 | "results = ttest_ind(brain_alz, brain_typ)\n", 265 | "print('t = ', results.statistic)\n", 266 | "print('p = ', results.pvalue)\n", 267 | "\n", 268 | "# Show boxplot of brain_vol differences\n", 269 | "df.boxplot(column='brain_vol', by='alzheimers')\n", 270 | "plt.show()" 271 | ] 272 | }, 273 | { 274 | "cell_type": "markdown", 275 | "metadata": {}, 276 | "source": [ 277 | "Normalizing metrics\n", 278 | "\n", 279 | "We previously saw that there was not a significant difference between the brain volumes of elderly individuals with and without Alzheimer's Disease.\n", 280 | "\n", 281 | "But could a correlated measure, such as \"skull volume\" be masking the differences?\n", 282 | "\n", 283 | "For this exercise, calculate a new test statistic for the comparison of brain volume between groups, after adjusting for the subject's skull size.\n", 284 | "\n", 285 | "Using results.statistic and results.pvalue as your guide, answer the question: Is there strong evidence that Alzheimer's Disease is marked by smaller brain size, relative to skull size?" 286 | ] 287 | }, 288 | { 289 | "cell_type": "code", 290 | "execution_count": null, 291 | "metadata": {}, 292 | "outputs": [], 293 | "source": [ 294 | "# Import independent two-sample t-test\n", 295 | "from scipy.stats import ttest_ind\n", 296 | "\n", 297 | "# Divide \`df.brain_vol\` by \`df.skull_vol\`\n", 298 | "df['adj_brain_vol'] = df.brain_vol / df.skull_vol\n", 299 | "\n", 300 | "# Select brain measures by Alzheimers group\n", 301 | "brain_alz = df.loc[df.alzheimers == True, 'adj_brain_vol']\n", 302 | "brain_typ = df.loc[df.alzheimers == False, 'adj_brain_vol']\n", 303 | "\n", 304 | "# Evaluate null hypothesis and report the statistics\n", 305 | "results = ttest_ind(brain_alz, brain_typ)\nprint('t = ', results.statistic)\nprint('p = ', results.pvalue)" 306 | ] 307 | } 308 | ], 309 | "metadata": { 310 | "language_info": { 311 | "name": "python" 312 | } 313 | }, 314 | "nbformat": 4, 315 | "nbformat_minor": 2 316 | } 317 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Sondos Ahmad Aabed 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Biomedical-Image-Analysis-in-Python 2 | The field of biomedical imaging has exploded in recent years - but for the uninitiated, even loading data can be a challenge! In this introductory course, I have learned the fundamentals of image analysis using NumPy, SciPy, and Matplotlib. I have navigated through a whole-body CT scan, segmented a cardiac MRI time series, and determined whether Alzheimer’s disease changes brain structure. I have finished the course with a solid toolkit for entering this dynamic field. 3 | 4 | ## Statement of Accomplishment 5 | ![image](https://github.com/sondosaabed/Biomedical-Image-Analysis-in-Python/assets/65151701/9798dfc0-d77e-4022-aba6-ada86072d620) 6 | 7 | ## Course Material 8 | ![image](https://github.com/sondosaabed/Biomedical-Image-Analysis-in-Python/assets/65151701/1c021b48-92dd-411d-a053-b8b95e76a680) 9 | -------------------------------------------------------------------------------- /Theory/chapter1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sondosaabed/Biomedical-Image-Analysis-in-Python/5c147c2212f464200d7cb5eafd65cd7b22e392a8/Theory/chapter1.pdf -------------------------------------------------------------------------------- /Theory/chapter2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sondosaabed/Biomedical-Image-Analysis-in-Python/5c147c2212f464200d7cb5eafd65cd7b22e392a8/Theory/chapter2.pdf -------------------------------------------------------------------------------- /Theory/chapter3.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sondosaabed/Biomedical-Image-Analysis-in-Python/5c147c2212f464200d7cb5eafd65cd7b22e392a8/Theory/chapter3.pdf -------------------------------------------------------------------------------- /Theory/chapter4.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sondosaabed/Biomedical-Image-Analysis-in-Python/5c147c2212f464200d7cb5eafd65cd7b22e392a8/Theory/chapter4.pdf --------------------------------------------------------------------------------