├── raw_data.mat
├── README.md
└── SENSE_tutorial.m

/raw_data.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mchiew/SENSE-tutorial/HEAD/raw_data.mat
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# SENSE Parallel Imaging Tutorial for MRI

This MATLAB tutorial gives an introduction to SENSE parallel imaging in MRI. It walks through the estimation of coil sensitivities, combining images from multiple coils, and reconstruction of under-sampled data using the SENSE algorithm.

For an HTML-rendered version of the tutorial, please click [here](http://htmlpreview.github.io/?https://github.com/mchiew/SENSE-tutorial/blob/main/SENSE_tutorial.html)
--------------------------------------------------------------------------------

/SENSE_tutorial.m:
--------------------------------------------------------------------------------
%% SENSE Parallel Imaging
% Mark Chiew (mark.chiew@utoronto.ca)
%
% This MATLAB tutorial gives an introduction to SENSE parallel imaging in
% MRI. It walks through the estimation of coil sensitivities, combining images
% from multiple coils, and reconstruction of under-sampled data using the SENSE
% algorithm.
%
% The data required for this tutorial can be downloaded from:
%
% https://github.com/mchiew/SENSE-tutorial/blob/main/raw_data.mat
%
% Changelog
% 2014: Initial version
% 2022: Migrated to GitHub
% 2024: Updated email, raw data link

%% Load data
%%
load('raw_data.mat');

%% Explore and view sensitivities
% There is a single raw 4D array containing the data we'll be working with,
% named |raw|

[Nx,Ny,Nz,Nc] = size(raw)
%%
% The dimensions of the dataset are |(Nx, Ny, Nz, Nc)|, where |(Nx, Ny, Nz)|
% correspond to the spatial image matrix size, and |Nc| corresponds to the number
% of different receiver coils (or coil channels).
%
% In this case, we're working with a |96x96| 2D image (|Nz=1|) that has
% |16| different coils.
%
% The raw data are provided exactly as measured, which means this is complex
% k-space data. We get one k-space dataset per coil, and we can view its
% magnitude:

show_grid(log(abs(raw)), [-2 8], jet)
%%
% How do the k-space data for each coil look similar/different?
%
% The coil in the bottom right (coil |#16|), for example, and the one right
% above it (coil |#15|) look very similar in k-space magnitude. Do we expect them
% to look the same in image space?
%
% To look at the images, we need to transform the k-space data into image
% space, via the inverse discrete Fourier transform (DFT). The |FFT| (and its
% inverse, the |iFFT|) is the most efficient way to compute DFTs. We have defined
% some helper functions to do this properly for us - they just make sure that
% our imaging conventions match up with MATLAB's FFT conventions:

img = ifft2c(raw);
show_grid(abs(img),[0 16], gray);
%%
% * How do the images look different?
% * What do you think this tells you about the physical coil array?
% * Do you notice anything about the k-space magnitude data that tells you about
% the coil images?
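%%
% As a quick sanity check of those conventions, |fft2c| and |ifft2c| (defined
% at the end of this script) should invert one another, so a k-space ->
% image -> k-space round trip should recover the raw data to within
% floating-point error:

round_trip_err = norm(reshape(fft2c(ifft2c(raw)) - raw, [], 1))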
%% Combine images
% Before we talk more about coil sensitivities, let's consider the simple problem
% of combining our multiple coil images into a single representative brain image.
%
% That means we want to define a transform that takes our |[Nx, Ny, 1, Nc]|
% data and maps it to a single |[Nx, Ny, 1, 1]| 2D image. For example, we could
% just take the mean:

img_combined = mean(img,4);
show_img(abs(img_combined),[]);
%%
% This image is clearly not an accurate representation of what we expect
% this brain slice to look like. What we would like a coil combination to produce
% is an image that does not have any coil-specific weighting or influence -
% we would like to see what the brain looks like, not what the coils look like.
%
% * What are the problems here, and why do they occur?
% * Can you come up with a better coil-combination transform?
%
% Hint: Try transforms of the form:
%
% $${\left(\sum_{l=1}^{N_c } {\left|z_l \right|}^n \right)}^{\frac{1}{n}}$$
%
% where $z_l$ is the voxel value from coil $l$.

% define and view your own coil-combine transforms
% [Nx, Ny, 1, Nc] -> [Nx, Ny]
% Try linear and non-linear operations!
% e.g.:
img_combined = sqrt(sum(abs(img).^2,4));
show_img(abs(img_combined),[]);

%% Estimate sensitivities
% We know that we can model the coil images using a continuous representation
% with the following equation:
%
% $C_l \left(x\right)=S_l \left(x\right)\cdot M\left(x\right)$ [1]
%
% where $C_l$ is the image from coil $l$, $S_l$ is the _sensitivity_ of coil
% $l$, and $M$ is the underlying sample magnetisation. The $\left(\cdot \right)$
% operator denotes point-wise multiplication here.
%
% We can write this more explicitly in matrix form, after discretisation,
% as:
%
% $c=S\;m$ [2]
%
% by vertically concatenating across the coil dimension, to fit everything
% into a single equation. Now $m$ is an $N_x N_y \times 1$ vectorised image, $S$
% is a vertical concatenation of $N_c$ diagonal matrices such that the diagonal
% entries of the $l^{\mathrm{th}}$ matrix are the $N_x N_y$ values of $S_l$.
% This makes $S$ an $N_x N_y N_c \times N_x N_y$ matrix, which means $c$ has dimensions
% $N_x N_y N_c \times 1$, which is consistent with having $N_c$ images of size
% $N_x N_y$. An explicit example of a case where $N_x =N_y =N_c =2$:
%
% $$\left\lbrack \begin{array}{c}c_{11}^1 \\c_{12}^1 \\c_{21}^1 \\c_{22}^1
% \\c_{11}^2 \\c_{12}^2 \\c_{21}^2 \\c_{22}^2 \end{array}\right\rbrack =\left\lbrack
% \begin{array}{cccc}s_{11}^1 & 0 & 0 & 0\\0 & s_{12}^1 & 0 & 0\\0 & 0 & s_{21}^1
% & 0\\0 & 0 & 0 & s_{22}^1 \\s_{11}^2 & 0 & 0 & 0\\0 & s_{12}^2 & 0 & 0\\0
% & 0 & s_{21}^2 & 0\\0 & 0 & 0 & s_{22}^2 \end{array}\right\rbrack \left\lbrack
% \begin{array}{c}m_{11} \\m_{12} \\m_{21} \\m_{22} \end{array}\right\rbrack$$
%
% The only thing we have to work with is $c$. What we need to do, however,
% is estimate $S$, despite the fact that both $S$ and $m$ are unknown. Unfortunately,
% because the RHS of equation [2] has more unknowns than there are knowns on
% the LHS, this problem has no unique solution.
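%%
% To make the structure of Eq. [2] concrete, here is a small toy construction
% (with made-up numbers, purely for illustration) of $S$ as a vertical
% concatenation of diagonal matrices for the $N_x =N_y =N_c =2$ case. The
% matrix-vector product should reproduce the point-wise products of Eq. [1]
% for each coil (MATLAB's |(:)| vectorisation is column-major, but the
% ordering doesn't matter for this check):

S1_toy = [1 2; 3 4];                          % made-up sensitivity, coil 1
S2_toy = [5 6; 7 8];                          % made-up sensitivity, coil 2
m_toy  = [1; 2; 3; 4];                        % vectorised 2x2 magnetisation
S_toy  = [diag(S1_toy(:)); diag(S2_toy(:))];  % 8x4 stack of diagonal blocks
c_toy  = S_toy*m_toy;                         % 8x1: two vectorised coil images
isequal(c_toy, [S1_toy(:).*m_toy; S2_toy(:).*m_toy]) % should display 1 (true)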
%
% Sometimes, an additional image is taken with what is called a _body coil_,
% which can be used to approximate $m$, but the body coil image still carries
% its own bias, it is not available on all systems, and it requires extra
% time. Instead, we can try and estimate $m$ directly ourselves, from the data
% we already have.
%
% Try using your coil-combined image from the previous section as an approximation
% of $m$. Then solve for $S$ by dividing out $m$ from $c$:

S_0 = img./img_combined;
%S_0 = bsxfun(@rdivide, img, img_combined); % for older platforms
show_grid(abs(S_0),[0 1],jet);
%%
% You should hopefully see something like smoothly varying images that reflect
% the overall shading you saw in the raw coil images, but without any of the actual
% brain structure or contrast!
%
% [N.B. If you don't see this, you should probably go back and try to refine
% your combined coil image before you continue]
%
% It turns out that we have an extra piece of information we can use to solve
% our estimation problem - coil sensitivities must be _smooth_ over space, in
% all directions. This is because coil sensitivities can be related to the static
% magnetic fields produced by the same coils with some applied direct current,
% which you can calculate using the coil geometry and the Biot-Savart law. However,
% it is enough to know that, because of this, the coil sensitivities we estimate
% must be smooth, and we can use this constraint to help regularise or solve our
% problem!
%
% Try and refine your sensitivity estimates by enforcing a smoothness constraint:
%
% Hint: The easiest thing to do here is something like convolving your estimate
% with a smoothing kernel. Alternatively, you can try fitting your data to some
% low-order polynomial model, restricting the spatial frequencies of the input
% data by spatial filtering, or penalising the spatial variation in your estimate
% and solving your own optimisation problem.

% Define your own S_1 which is a smoothed version of S_0
% e.g.: convolve each coil's estimate with a 9x9 boxcar kernel
kernel = ones(9)/9^2;
S_1 = zeros(Nx,Ny,Nz,Nc);
for i = 1:Nc
    S_1(:,:,1,i) = conv2(S_0(:,:,1,i),kernel,'same');
end
show_grid(abs(S_1),[0 1],jet);
% (you could also try polynomial fitting here)
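%%
% Another option from the hint above is to restrict the spatial frequencies
% of the estimate directly: transform |S_0| into k-space, retain only a small
% window around the centre, and transform back. A sketch of this idea (the
% half-width |w| is an arbitrary choice here, not a tuned value, and a
% smoother taper, e.g. a Gaussian, would reduce ringing):

w = 8;                                   % half-width of retained k-space window
win = zeros(Nx,Ny);
win(Nx/2+1-w:Nx/2+1+w, Ny/2+1-w:Ny/2+1+w) = 1;
S_1k = ifft2c(fft2c(S_0).*win);          % low-pass filtered sensitivities
show_grid(abs(S_1k),[0 1],jet);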
%%
% The last step here is to mask the sensitivities. Because they won't be
% well defined anywhere outside of the brain (because $m=0$ in those areas), we
% should mask the sensitivities to retain only the well-defined regions. We won't
% lose anything here, because we don't strictly _need_ to know what the sensitivity
% is outside the brain anyway, so we can safely set those regions to 0.
%
% *Hint*: Try and find some threshold or some other classifier that distinguishes
% brain vs. background, and define a brain mask

% Define your own S_2 which is a masked version of S_1
% e.g.:
thresh = 0.05*max(abs(img_combined(:)));
mask = abs(img_combined) > thresh;
S_2 = S_1.*mask;
show_grid(abs(S_2),[0 1],jet);
%%
% For reference, here are sensitivities estimated via a method published
% in (Walsh et al., _Magn Reson Med_, 2000):

S_ref = adaptive_est_sens(img);
% Your S_2 on the left, the reference on the right
show_grid(cat(2,abs(S_2),abs(S_ref)),[0,1],jet);

%% Use sensitivities
% Now that we have the sensitivities (feel free to use your own or the reference
% sensitivities), we can use them to actually solve the "coil-combine" problem
% in a true least-squares sense, rather than the ad-hoc approach we took above.
%
% Referring back to the linear equation [2], with both $c$ and $S$ known,
% it should be straightforward to come up with the _least-squares optimal_
% |[Nx, Ny, 1, Nc]| to |[Nx, Ny, 1, 1]| transform to apply to the $N_c$ images
% in $c$, to recover the single image $m$.
%
% *Hint*: It might be easiest to work it out on paper first from Eq. 2, analytically,
% before figuring out how to implement it

% Define your new coil-combine transform using
% your knowledge of S
% (this is the voxel-wise least-squares solution (S^H c)/(S^H S))
img_combined_opt = sum(img.*conj(S_2),4)./sum(S_2.*conj(S_2),4);
show_img(abs(img_combined_opt),[0,48],gray);

%% SENSE Parallel Imaging
% Now, the utility of multi-channel array coils really comes from "parallel
% imaging", which is a term used to denote reconstruction of images from under-sampled
% data using coil sensitivity information. The _parallel_ in parallel imaging
% refers to the fact that each coil channel measures its own version of the image
% in parallel with all the others, so there's no time cost to additional measurements.
% The number of parallel channels is limited by hardware and coil array design
% considerations.
%
% SENSE is one of the earliest formulations of parallel imaging, and stands
% for SENSitivity Encoding (Pruessmann et al., _Magn Reson Med_, 1999). It formulates
% the problem and its solution in the context of the same linear equations and
% least-squares solutions we considered in the previous section.
%
% First, let's consider what happens to our images when we "under-sample"
% our k-space data. To do this, we can zero/mask out the k-space values we
% pretend not to have acquired:

raw_R2 = raw;
raw_R2(2:2:end,:,:,:) = 0;
img_R2 = ifft2c(raw_R2);
show_grid(abs(img_R2),[0 16],gray)
%%
% Collecting only half of the necessary k-space data in this way results
% in _aliasing_ of the brain image, so that we can no longer unambiguously separate
% out the top and bottom portions of the brain. This is unsurprising, because
% we've thrown out half of our measurements!
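%%
% The aliased images have a simple structure: with every other k-space line
% zeroed, each coil image should be half the sum of the true coil image and
% a copy shifted by |Nx/2| (for this sampling pattern and these FFT shift
% conventions). We can check this claim directly, comparing the aliased
% images (left) against the constructed superposition (right):

alias_check = 0.5*(img + circshift(img, Nx/2, 1));
show_grid(cat(2, abs(img_R2), abs(alias_check)), [0 16], gray)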
%
% Let's model our under-sampled imaging problem:
%
% $k=\Phi \;F\;S\;m$ [3]
%
% where $S$ and $m$ are defined as in Eq. 2, $F$ is the DFT matrix, and $\Phi$ is
% a diagonal matrix with only |1|s and |0|s on the diagonal, indicating which
% data has been sampled (|1|) or not (|0|).
%
% When all the data is sampled, $\Phi$ is the identity matrix, and so we
% can see Eq. 3 is entirely consistent with Eq. 2:
%
% $c=F^{-1} k=S\;m$ [4]
%
% To solve for $m$ from Eq. 3, in the least-squares sense, we get:
%
% $\hat{m} ={\left({\left(\Phi \;F\;S\right)}^* \left(\Phi \;F\;S\right)\right)}^{-1}
% {\left(\Phi \;F\;S\right)}^* k={\left(S^* F^{-1} \Phi \;F\;S\right)}^{-1} S^*
% F^{-1} \Phi \;k$ [5]
%
% when you note that $\Phi^* \Phi =\Phi$, and that the DFT is unitary, so $F^*
% =F^{-1}$. However, representing this inverse explicitly is very costly, both
% in terms of memory and computation time.
%
% * How much memory would it cost you to solve Eq. 5 directly?
%
% Luckily, when the matrix $\Phi$ takes on specific forms, the aliasing patterns
% result in many smaller sub-problems that can be solved easily and independently.
% Specifically, when $\Phi$ samples every $R^{\mathrm{th}}$ line of k-space, we
% get regular aliasing (overlap) patterns, so that any given voxel overlaps with
% at most $R-1$ other voxels. This turns an $N_x N_y$-dimensional problem into
% $\frac{N_x N_y }{R}$ $R$-dimensional subproblems.
%
% Notice, for example, in the images plotted above, that within the field-of-view,
% each coil image is the superposition (alias or overlap) of the "true" image,
% plus a copy shifted by $\frac{N_x }{2}$ in the "up-down" or x-direction. So
% we can model each subproblem as:
%
% $\left\lbrack \begin{array}{c}c_{x,y}^1 \\c_{x,y}^2 \\\vdots \\c_{x,y}^{N_c }
% \end{array}\right\rbrack =\left\lbrack \begin{array}{cc}S_{x,y}^1 & S_{x+N_x /2,y}^1
% \\S_{x,y}^2 & S_{x+N_x /2,y}^2 \\\vdots & \vdots \\S_{x,y}^{N_c } & S_{x+N_x /2,y}^{N_c }
% \end{array}\right\rbrack \left\lbrack \begin{array}{c}m_{x,y} \\m_{x+N_x /2,y}
% \end{array}\right\rbrack$ [6]
%
% So, as we solve the unaliasing problems for the top half of the aliased
% images, we get solutions for the "true" top voxel value, as well as the aliased
% voxel from $N_x /2$ below.
%
% To unalias your image, you need to solve Eq. 6. The most common and natural
% way to do this is via least-squares. Implementing a least-squares solver for
% Eq. 6 might look something like this:

% initialise output image
img_R2_SENSE = zeros(Nx,Ny);
% loop over the top-half of the image
for x = 1:Nx/2
    % loop over the entire left-right extent
    for y = 1:Ny
        % pick out the sub-problem sensitivities
        S_R2 = transpose(reshape(S_2([x x+Nx/2],y,1,:),2,[]));
        % solve the sub-problem in the least-squares sense
        img_R2_SENSE([x x+Nx/2],y) = pinv(S_R2)*reshape(img_R2(x,y,1,:),[],1);
    end
end
% plot the result
show_img(abs(img_R2_SENSE),[0 32],gray);
%%
% Hopefully, this looks like a normal brain image, despite the fact that
% half the information in k-space is missing! The SENSE method takes sensitivities
% and aliased coil images as input, and outputs a single unaliased image, which
% is what we get here.
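%%
% As a quick consistency check, within the brain mask the |R=2| reconstruction
% should approximately match the fully sampled least-squares combine, up to
% a global factor of 2 (from zeroing half the k-space lines), and up to errors
% in the sensitivity estimates:

rel_err_R2 = norm(2*img_R2_SENSE(mask) - img_combined_opt(mask)) ...
                 / norm(img_combined_opt(mask))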
%
% Try solving the |R=3| problem:

raw_R3 = raw;
raw_R3(2:3:end,:,:,:) = 0;
raw_R3(3:3:end,:,:,:) = 0;
img_R3 = ifft2c(raw_R3);
show_grid(abs(img_R3),[0 16],gray)
%%

% define your solution to the R=3 problem
img_R3_SENSE = SENSE(img_R3,S_2,3);
show_img(abs(img_R3_SENSE),[0 16],gray);
%%
% and the |R=4| problem:

raw_R4 = raw;
raw_R4(2:4:end,:,:,:) = 0;
raw_R4(3:4:end,:,:,:) = 0;
raw_R4(4:4:end,:,:,:) = 0;
img_R4 = ifft2c(raw_R4);
show_grid(abs(img_R4),[0 16],gray)
% define your solution to the R=4 problem
img_R4_SENSE = SENSE(img_R4,S_2,4);
show_img(abs(img_R4_SENSE),[0 16],gray);
%%
% Try performing the reconstruction using your less refined sensitivity
% estimates |S_0| and |S_1|, and the reference sensitivities |S_ref|. How do the
% reconstructions differ?

% R=4 SENSE reconstruction with S_0, S_1, and S_ref
img_S0 = SENSE(img_R4,S_0,4);
img_S1 = SENSE(img_R4,S_1,4);
img_SREF = SENSE(img_R4,S_ref,4);
show_grid(abs(cat(4,img_S0,img_S1,img_R4_SENSE,img_SREF)),[0 16],gray)
%%
% Although the reconstruction using |S_0| looks best here, in reality the
% |S_0| sensitivities are over-fit to this data, and represent an unrealistic
% case, since we typically wouldn't derive the sensitivities from the exact same
% dataset we apply them to, like we're doing here for demonstration purposes.
% Now try performing the same set of reconstructions, but with the source image
% shifted slightly in space to see the effect of over-fitting:

% R=4 SENSE reconstruction with S_0, S_1, S_2, and S_ref in the presence of
% slight image translation
img_R4_moved = circshift(circshift(img_R4,1,2),1,1);
img_S0b = SENSE(img_R4_moved,S_0,4);
img_S1b = SENSE(img_R4_moved,S_1,4);
img_S2b = SENSE(img_R4_moved,S_2,4);
img_SREFb = SENSE(img_R4_moved,S_ref,4);
show_grid(abs(cat(4,img_S0b,img_S1b,img_S2b,img_SREFb)),[0 16],gray)
%%
% * What do you observe?

%% The Limits of Parallel Imaging
% Now that we have our SENSE implementation working, let's consider the limits
% of parallel imaging.
%
% Try pushing the under-sampling factor to |R=6|:

% R=6 SENSE reconstruction
raw_R6 = raw;
idx_R6 = setdiff(1:Nx,1:6:Nx);
raw_R6(idx_R6,:,:,:) = 0;
img_R6 = ifft2c(raw_R6);
img_R6_SENSE = SENSE(img_R6,S_2,6);
show_img(abs(img_R6_SENSE),[0 16],gray);
%%
% Or |R=8|:

% R=8 SENSE reconstruction
raw_R8 = raw;
idx_R8 = setdiff(1:Nx,1:8:Nx);
raw_R8(idx_R8,:,:,:) = 0;
img_R8 = ifft2c(raw_R8);
img_R8_SENSE = SENSE(img_R8,S_2,8);
show_img(abs(img_R8_SENSE),[0 16],gray);
%%
% Along the same lines, what happens if you select a subset of the coil
% information? Try a reconstruction at |R=4| using only |Nc=8| of the coil sensitivities
% and corresponding images:

% Nc=8 SENSE reconstruction @ R=4
img_R4_Nc8 = SENSE(img_R4(:,:,:,1:8),S_2(:,:,:,1:8),4);
show_img(abs(img_R4_Nc8),[0 16],gray);
%%
% Or |Nc=4|:

% Nc=4 SENSE reconstruction @ R=4
img_R4_Nc4 = SENSE(img_R4(:,:,:,1:4),S_2(:,:,:,1:4),4);
show_img(abs(img_R4_Nc4),[0 16],gray);
%%
% * What happens to the reconstructions as |Nc| is close to or equal to |R|?
% * How might you select an _optimal_ subset of |Nc=4| coils? What makes it
% _optimal_?
% * What happens if you try to perform an |R=4| reconstruction with |Nc < R|?
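%%
% The reconstructions above call a helper function |SENSE(img, S, R)|. Below
% is a minimal sketch of such a helper, written by generalising the |R=2|
% loop from earlier; it is not necessarily identical to the tutorial's own
% implementation. It assumes regular undersampling along the first (x)
% dimension, with |Nx| divisible by |R|. In MATLAB, local functions like this
% must sit at the end of the script, alongside |fft2c| and |ifft2c| below.

function out = SENSE(img, S, R)
    % img: [Nx, Ny, 1, Nc] aliased coil images
    % S:   [Nx, Ny, 1, Nc] coil sensitivities
    % R:   undersampling (acceleration) factor
    [Nx, Ny, ~, ~] = size(img);
    out = zeros(Nx, Ny);
    % loop over one alias group per row position in the top 1/R of the FOV
    for x = 1:Nx/R
        % the R voxel locations that alias onto position x
        idx = x:Nx/R:Nx;
        for y = 1:Ny
            % pick out the Nc x R sub-problem sensitivity matrix
            Sr = transpose(reshape(S(idx,y,1,:),R,[]));
            % solve the R-voxel sub-problem in the least-squares sense
            out(idx,y) = pinv(Sr)*reshape(img(x,y,1,:),[],1);
        end
    end
end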
%% Helper functions
% (The full tutorial file also defines the display helpers |show_grid| and
% |show_img|, and the Walsh-method sensitivity estimator |adaptive_est_sens|,
% which are used above.)

function out = fft2c(input)
    % centred 2D FFT (image space -> k-space), applied along dims 1 and 2
    out = fftshift(fft(ifftshift(input,1),[],1),1);
    out = fftshift(fft(ifftshift(out,2),[],2),2);
end

function out = ifft2c(input)
    % centred 2D inverse FFT (k-space -> image space), applied along dims 1 and 2
    out = fftshift(ifft(ifftshift(input,1),[],1),1);
    out = fftshift(ifft(ifftshift(out,2),[],2),2);
end
--------------------------------------------------------------------------------