├── GRAPPA.pdf ├── README.md ├── data └── data.mat ├── examples.m ├── grappa-practical.html ├── grappa.m ├── grappa_apply_weights.m ├── grappa_estimate_weights.m ├── grappa_get_indices.m ├── grappa_get_pad_size.m ├── grappa_pad_data.m ├── grappa_unpad_data.m ├── show_quad.m └── solution ├── grappa_apply_weights.m ├── grappa_estimate_weights.m ├── grappa_get_indices.m ├── grappa_get_pad_size.m ├── grappa_pad_data.m └── grappa_unpad_data.m /GRAPPA.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mchiew/grappa-tutorial/a38145383fb476e06191ccdc0e36279fa12d4d46/GRAPPA.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # GRAPPA Parallel Imaging Tutorial 2 | Mark Chiew (mchiew@fmrib.ox.ac.uk) 3 | 4 | ## Introduction 5 | Parallel imaging, broadly speaking, refers to the use of multi-channel receive coil information to reconstruct under-sampled data. 6 | It is called "parallel" because the set of receive channels all acquire data "in parallel" - i.e., they all record at the same time. 7 | GRAPPA (Griswold et al., MRM 2002) is one of the most popular techniques, among many, for performing parallel imaging reconstruction. 8 | Others include SENSE (Pruessmann et al., MRM 1999) and ESPIRiT (Uecker et al., MRM 2014). 9 | 10 | So, in this tutorial, we'll go over the basics of _what_ GRAPPA does, and _how_ to do it. 11 | What we won't cover is _why_ GRAPPA, or parallel imaging in general, works. I refer you to one of the many review papers on Parallel Imaging. 
12 | 13 | ## The GRAPPA Problem 14 | ### Problem Definition 15 | First, this is an example of the type of problem we are trying to solve: 16 | 17 | Coil #1 k-space Coil #2 k-space 18 | 19 | oooooooooooooooo oooooooooooooooo 20 | ---------------- ---------------- 21 | oooooooooooooooo oooooooooooooooo o : Acquired k-space data point 22 | ---------------- ---------------- - : Missing k-space data 23 | oooooooooooooooo oooooooooooooooo 24 | ---------------- ---------------- 25 | oooooooooooooooo oooooooooooooooo 26 | ---------------- ---------------- 27 | oooooooooooooooo oooooooooooooooo 28 | ---------------- ---------------- 29 | oooooooooooooooo oooooooooooooooo 30 | ---------------- ---------------- 31 | 32 | The acquisition above is missing every other line `(R=2)`. To generate our output images, we will need to fill in these missing lines. 33 | GRAPPA gives us a set of steps to fill in these missing lines. We do this by using a weighted combination of surrounding points, from all coils. 34 | 35 | ### Overview 36 | Here's what this practical will help you with: 37 | 38 | 1. Understanding and defining GRAPPA kernel geometries 39 | 2. Learning how to construct the GRAPPA synthesis problem as a linear system 40 | 3. 
Learning how to estimate the kernel weights needed to perform GRAPPA reconstruction 41 | 42 | ### Step-by-Step 43 | Let's consider one of the missing points we want to reconstruct, from coil 1, denoted `X`: 44 | 45 | Coil #1 k-space Coil #2 k-space 46 | 47 | oooooooooooooooo oooooooooooooooo 48 | ---------------- ---------------- 49 | oooooooooooooooo oooooooooooooooo o : Acquired k-space data point 50 | ---------------- ---------------- - : Missing k-space data 51 | oooooooooooooooo oooooooooooooooo 52 | --------X------- ---------------- X : Reconstruction Target 53 | oooooooooooooooo oooooooooooooooo 54 | ---------------- ---------------- 55 | oooooooooooooooo oooooooooooooooo 56 | ---------------- ---------------- 57 | oooooooooooooooo oooooooooooooooo 58 | ---------------- ---------------- 59 | 60 | To recover the target point `X`, we need to choose a local neighbourhood of surrounding acquired points, as well as the "parallel" neighbourhoods of the other coils. 61 | 62 | Coil #1 k-space Coil #2 k-space 63 | 64 | oooooooooooooooo oooooooooooooooo 65 | ---------------- ---------------- 66 | oooooooooooooooo oooooooooooooooo o : Acquired k-space data point 67 | ---------------- ---------------- - : Missing k-space data 68 | ooooooo***oooooo ooooooo***oooooo 69 | --------X------- --------Y------- X : Reconstruction target 70 | ooooooo***oooooo ooooooo***oooooo Y : Reconstruction target position in other coils 71 | ---------------- ---------------- 72 | oooooooooooooooo oooooooooooooooo * : Neighbourhood sources 73 | ---------------- ---------------- 74 | oooooooooooooooo oooooooooooooooo 75 | ---------------- ---------------- 76 | 77 | We chose a neighbourhood, or "kernel" of 3 points in the x-direction, and 2 points in the y-direction, centred on the target position. 78 | These source points come from the same coil as the target point, as well as in the same locations from the other parallel coils. 
79 | Zooming in on the target point, and its sources: 80 | 81 | Coil #1 k-space Coil #2 k-space 82 | 83 | * * * * * * 84 | X : Reconstruction target 85 | - X - - Y - Y : Reconstruction target position in other coils 86 | 87 | * * * * * * * : Neighbourhood sources 88 | 89 | We can see that there are 3x2=6 source points per coil. This is generally referred to as a 3x2 kernel geometry. 90 | If we label all the source points in this 2-coil example: 91 | 92 | Coil #1 k-space Coil #2 k-space 93 | 94 | a b c g h i 95 | X : Reconstruction target 96 | - X - - Y - Y : Reconstruction target position in other coils 97 | 98 | d e f j k l [a-l] : Source points 99 | 100 | The GRAPPA weighted-combination formulation expresses the target as: 101 | 102 | X = wx_a*a + wx_b*b + wx_c*c + ... + wx_l*l; {Eq. 1a} 103 | 104 | where `wx_n` refers to the weight for source n, and `a-l` are the complex k-space data points. 105 | While the kernels are shift-invariant over the entire k-space, they are specific to the target coils. 106 | 107 | Therefore, to reconstruct target point Y in the second coil, a different set of weights is required: 108 | 109 | Y = wy_a*a + wy_b*b + wy_c*c + ... + wy_l*l; {Eq. 1b} 110 | 111 | So, for example, if you have 8 coils, using 3x2 kernel geometries, each target coil needs 8x6=48 different kernel 112 | weights (one per source point, across all coils), and the entire group of weights for all target coils is called the kernel or weight set. 113 | 114 | If we write this as a matrix equation, we can see that: 115 | 116 | M = W*S; {Eq. 2a} 117 | or 118 | [X] [wx_a wx_b wx_c ... wx_l] [a] 119 | [Y] [wy_a wy_b wy_c ... wy_l] [b] 120 | . = [ . ] [c] {Eq. 2b} 121 | . [ . ] [d] 122 | [Z] [wz_a wz_b wz_c ... wz_l] [e] 123 | [.] 124 | [.] 125 | [.] 126 | [l] 127 | 128 | where `M=(X,Y,...Z)'` is the vector of missing points, one from each coil, and `W` is the kernel weight matrix, where each coil's weights comprise one row. 129 | `S` is the vector of source points - their order is not important, but it must be kept consistent with the ordering of the weights. 
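To make Eq. 2 concrete, here is a minimal sketch in Python/NumPy (not the tutorial's MATLAB code - the shapes and random values are purely illustrative), showing that a 2-coil, 3x2-kernel problem feeds 12 sources into each target:

```python
import numpy as np

# Hedged NumPy sketch of Eq. 2 (M = W*S); toy stand-in shapes and values,
# not the tutorial's MATLAB code.
rng = np.random.default_rng(0)

n_coils, kx, ky = 2, 3, 2          # 2 coils, 3x2 kernel geometry
n_src = n_coils * kx * ky          # 12 source points [a-l] feed every target

W = rng.standard_normal((n_coils, n_src)) + 1j * rng.standard_normal((n_coils, n_src))
S = rng.standard_normal((n_src, 1)) + 1j * rng.standard_normal((n_src, 1))

M = W @ S                          # one missing point per coil: (X, Y)'
print(M.shape)                     # (2, 1)
```

Each row of `W` holds one coil's 12 weights, so `M` comes out with one reconstructed point per coil.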
130 | 131 | Finally, this expression only holds for a fixed kernel geometry (i.e. source-target geometry). For `R >= 2` (i.e. `R-1` missing lines for each measured line), 132 | there will be `R-1` distinct kernel sets. 133 | 134 | For example, in the case of `R=3`, you have 2 distinct kernel geometries to work with: 135 | 136 | Coil #1 k-space Coil #2 k-space 137 | 138 | oooooooooooooooo oooooooooooooooo 139 | ---------------- ---------------- 140 | ---------------- ---------------- o : Acquired k-space data point 141 | oooooooooooooooo oooooooooooooooo - : Missing k-space data 142 | ---------------- ---------------- 143 | ---------------- ---------------- --- ooo 144 | oooooooooooooooo oooooooooooooooo ooo First kernel --- Second kernel 145 | -------X-------- ---------------- -X- geometry -Y- geometry 146 | -------Y-------- ---------------- --- ooo 147 | oooooooooooooooo oooooooooooooooo ooo --- 148 | ---------------- ---------------- 149 | ---------------- ---------------- 150 | 151 | If you want to generalise this to `R > 2`, ultimately, for `C` coils, with kernel size `[Nx,Ny]`, and acceleration factor `R`, 152 | you will need to estimate `(R-1)*C*(C*Nx*Ny)` weight coefficients in total - for each of the `R-1` kernel geometries, each of the `C` target coils needs `C*Nx*Ny` weights. You can solve this as a single comprehensive system, or, because the problems are uncoupled, 153 | you may find it easier to solve the `R-1` classes of sub-problems separately. 154 | 155 | So Eq. 2 completely describes how you solve for each missing point, given acquired data in some neighbourhood around it. 156 | 157 | The one final piece of information we need is fully sampled "calibration" or "training" data, so that we can actually find the kernel weights. 158 | To do this, we simply solve Eq. 2 for the weights, typically in a least-squares sense (no pun intended), over the calibration data. 159 | In this case, M and S are both matrices, containing known information, representing source-target relationships across the entire calibration region. 
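The harvesting of source-target pairs from calibration data can be sketched as follows - a hedged Python/NumPy illustration in which the array shapes, loop bounds, and R=2 kernel geometry are my own assumptions, not the provided solution code:

```python
import numpy as np

# Hedged sketch: sliding a 3x2 kernel over fully sampled calibration data
# to collect one column of M (targets) and S (sources) per position.
rng = np.random.default_rng(1)

C, Nx, Ny = 2, 8, 8                # coils, calibration grid size (toy values)
calib = rng.standard_normal((C, Nx, Ny)) + 1j * rng.standard_normal((C, Nx, Ny))

M_cols, S_cols = [], []
for x in range(1, Nx - 1):         # keep the 3-wide kernel in bounds
    for y in range(1, Ny - 1):     # sources sit on the rows at y-1 and y+1
        M_cols.append(calib[:, x, y])                         # targets, all coils
        S_cols.append(calib[:, x-1:x+2, [y-1, y+1]].ravel())  # 3x2 sources, all coils

M = np.stack(M_cols, axis=1)       # C x n_fits
S = np.stack(S_cols, axis=1)       # (C*3*2) x n_fits
print(M.shape, S.shape)            # (2, 36) (12, 36)
```

Every kernel position contributes one more column, which is exactly why a larger calibration region gives a better-conditioned fit.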
160 | 161 | This is what "fitting the kernel" refers to: 162 | 163 | W = M*pinv(S); {Eq. 3a} 164 | or 165 | W = M*S'*(S*S')^-1; {Eq. 3b} 166 | 167 | To ensure a robust and well-conditioned fit, typically a relatively large calibration region is used to fit `W`. 168 | Over the calibration data, nearly every point is a potential source and target. Because all the points are present, 169 | we can use this data to learn the shift-invariant geometric relationships in the k-space x coil data. 170 | 171 | Coil #1 k-space Coil #2 k-space 172 | 173 | oooooooooooooooo oooooooooooooooo 174 | oooooooooooooooo oooooooooooooooo 175 | oooooooooooooooo oooooooooooooooo o : Acquired k-space calibration data 176 | oooooooooooooooo oooooooooooooooo 177 | oooooooooooooooo oooooooooooooooo 178 | oooooooooooooooo oooooooooooooooo 179 | oooooooooooooooo oooooooooooooooo 180 | oooooooooooooooo oooooooooooooooo 181 | oooooooooooooooo oooooooooooooooo 182 | oooooooooooooooo oooooooooooooooo 183 | oooooooooooooooo oooooooooooooooo 184 | oooooooooooooooo oooooooooooooooo 185 | 186 | So to solve Eq. 3, we "move" the kernel over the entire calibration space, and for every source-target pairing, we get an additional 187 | column in `M` and `S`: 188 | 189 | Coil #1 k-space Coil #2 k-space 190 | 191 | a1 b1 c1 - g1 h1 i1 - [X1] [wx_a wx_b wx_c ... wx_l] [a1] 192 | [Y1] [wy_a wy_b wy_c ... wy_l] [b1] 193 | - X1 - - - Y1 - - [ .] = [ . ] [c1] 194 | [ .] [ . ] [d1] 195 | d1 e1 f1 - j1 k1 l1 - [Z1] [wz_a wz_b wz_c ... wz_l] [e1] 196 | [ .] 197 | - - - - - - - - [ .] 198 | [l1] 199 | 200 | Coil #1 k-space Coil #2 k-space 201 | 202 | - a2 b2 c2 - g2 h2 i2 [X1 X2] [wx_a wx_b wx_c ... wx_l] [a1 a2] 203 | [Y1 Y2] [wy_a wy_b wy_c ... wy_l] [b1 b2] 204 | - - X2 - - - Y2 - [ . .] = [ . ] [c1 c2] 205 | [ . .] [ . ] [d1 d2] 206 | - d2 e2 f2 - j2 k2 l2 [Z1 Z2] [wz_a wz_b wz_c ... wz_l] [e1 e2] 207 | [ . .] 208 | - - - - - - - - [ . .] 
209 | [l1 l2] 210 | 211 | Coil #1 k-space Coil #2 k-space 212 | 213 | - - - - - - - - [X1 X2 ... Xn] [wx_a wx_b wx_c ... wx_l] [a1 a2 ... an] 214 | [Y1 Y2 ... Yn] [wy_a wy_b wy_c ... wy_l] [b1 b2 ... bn] 215 | - an bn cn - gn hn in [ . . ... .] = [ . ] [c1 c2 ... cn] 216 | [ . . ... .] [ . ] [d1 d2 ... dn] 217 | - - Xn - - - Yn - [Z1 Z2 ... Zn] [wz_a wz_b wz_c ... wz_l] [e1 e2 ... en] 218 | [ . . ... .] 219 | - dn en fn - jn kn ln [ . . ... .] 220 | [l1 l2 ... ln] 221 | 222 | Now that you have `M` and `S` fully populated, you can solve Eq. 3 in the least squares sense, by pseudo-inverting `S`: 223 | 224 | [wx_a wx_b wx_c ... wx_l] [X1 X2 ... Xn] [a1 a2 ... an] 225 | [wy_a wy_b wy_c ... wy_l] [Y1 Y2 ... Yn] [b1 b2 ... bn] 226 | [ . ] = [ . . ... .] *pinv([c1 c2 ... cn]) Eq. {4} 227 | [ . ] [ . . ... .] [d1 d2 ... dn] 228 | [wz_a wz_b wz_c ... wz_l] [Z1 Z2 ... Zn] [e1 e2 ... en] 229 | [ . . ... .] 230 | [ . . ... .] 231 | [l1 l2 ... ln] 232 | ### Summary 233 | Ultimately, the entire GRAPPA algorithm boils down to: 234 | 235 | 1. Choosing desired kernel geometries 236 | 2. Solving Eq. 3 to fit for `W` over the calibration data 237 | 3. Applying Eq. 2 to solve for `M`, using the calibrated `W`, over the actual under-sampled data 238 | 239 | ## Practical 240 | Now that you have a basic sense of the internal logic behind GRAPPA (the _what_), we'll get into a step-by-step practical on 241 | how to actually go through the mechanics of writing a GRAPPA-based image reconstruction program (the _how_). 242 | 243 | I've included skeleton code for each step, that you're free to use if you like. I've also included a full working step-by-step solution if 244 | you get stuck on any step. Use as much or as little of the provided solution as you like. 245 | 246 | At the end of the tutorial, I'll just briefly walk through my solution code. 
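Before writing any MATLAB, the calibration fit of Eq. 3 can be sanity-checked in a few lines - a hedged Python/NumPy sketch with synthetic data, in which an exactly linear source-target relationship is planted and then recovered:

```python
import numpy as np

# Hedged sketch of Eq. 3: build targets that exactly obey Eq. 2 for a
# known W_true, then recover it by pseudo-inverting S. Names are synthetic.
rng = np.random.default_rng(2)

n_coils, n_src, n_fits = 2, 12, 40
W_true = rng.standard_normal((n_coils, n_src)) + 1j * rng.standard_normal((n_coils, n_src))
S = rng.standard_normal((n_src, n_fits)) + 1j * rng.standard_normal((n_src, n_fits))
M = W_true @ S                     # calibration targets, per Eq. 2

W_fit = M @ np.linalg.pinv(S)      # Eq. 3: W = M*pinv(S)
print(np.allclose(W_fit, W_true))  # True (S has full row rank here)
```

With real data the relationship is only approximate, so the pseudoinverse gives the least-squares `W` rather than an exact one.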
247 | 248 | ### Step 1 - Organising the reconstruction code 249 | Source file: `grappa.m` 250 | 251 | We'll use `grappa.m` as our main function that takes in undersampled data and returns reconstructed data. 252 | We'll also be defining separate functions for most of the other steps, and the provided `grappa.m` file is already organised this way for you. 253 | I recommend you use the provided `grappa.m` file. 254 | 255 | ### Step 2 - Pad data to deal with kernels applied at k-space boundaries 256 | Source files: `grappa_get_pad_size.m`, `grappa_pad_data.m`, `grappa_unpad_data.m` 257 | 258 | We need to pad the k-space data that we're working with in order to accommodate kernels being applied at the boundary of the actual data. 259 | Because the kernels extend for some width beyond the target point, if the reconstruction target is at the edge, the kernel will necessarily 260 | need to grab data from beyond the sampled region. 261 | 262 | This is organised into 3 parts: 263 | `grappa_get_pad_size.m` should return the size of padding needed in each dimension, given your kernel size and under-sampling factor 264 | `grappa_pad_data.m` should perform the padding operation 265 | `grappa_unpad_data.m` should perform the un-padding operation 266 | 267 | ### Step 3 - Compute indices of source points relative to their targets 268 | Source file: `grappa_get_indices.m` 269 | 270 | This is in my opinion the trickiest part of the practical GRAPPA problem. In order to perform weight estimation and application, you 271 | will need to know the co-ordinates of every source point relative to its target point. It's not difficult to picture 272 | in your head, but making sure you've got your indexing correct is pretty crucial. 273 | 274 | You will need to return an array of source indices, where each column is paired with the corresponding element in the target index vector. 
275 | I use linear indexing for this (i.e., I index the 3D data arrays from 1 to C*Nx*Ny linearly, instead of using subscripts). 276 | 277 | I *strongly* recommend simply using the provided solution, unless you're feeling particularly keen. In my solution, for simplicity, I require that 278 | the kernel size be odd in the kx-direction, and even in the ky-direction. 279 | 280 | ### Step 4 - Perform weight estimation 281 | Source file: `grappa_estimate_weights.m` 282 | 283 | Given a paired set of source and target indices, you must now use those to collect corresponding dataset pairs from the calibration k-space 284 | in order to perform weight estimation. Don't forget that the weights must map data from _all_ coils to the target points in _each_ coil. 285 | 286 | You can do whatever you like here, if you know what you're doing. Otherwise, I recommend you perform a least squares fit. 287 | The easiest way to do that is to recognise that this is simply a linear regression problem as laid out above, and use the pseudoinverse (`pinv` in MATLAB). 288 | Eq. {4} is basically what you should get. 289 | 290 | The way I have structured my solution is to estimate weights for each of the R-1 missing line groups (or kernel geometries) separately. 291 | This makes the organisation a bit simpler, and is mathematically equivalent to solving for all weight sets at once. Feel free to try to implement 292 | the all-in-one approach if you have extra time. 293 | 294 | ### Step 5 - Apply weights to reconstruct missing data 295 | Source file: `grappa_apply_weights.m` 296 | 297 | Finally, the estimated weights need to be used to reconstruct the missing data. Here you also need to identify source and target indices from the 298 | actual under-sampled data, and then use the weights you derived to solve the reconstruction problem (Eq. {2}). 
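The application step can be sketched like so - again a hedged Python/NumPy illustration (toy shapes and an assumed R=2, 3x2-kernel geometry, not `grappa_apply_weights.m`):

```python
import numpy as np

# Hedged sketch of applying fitted weights (Eq. 2) and scattering the
# results back into the undersampled array; shapes/geometry are my own.
rng = np.random.default_rng(3)

C, Nx, Ny, R = 2, 8, 8, 2
kspace = rng.standard_normal((C, Nx, Ny)) + 1j * rng.standard_normal((C, Nx, Ny))
kspace[:, :, 1::R] = 0             # zero out the "missing" ky lines

W = rng.standard_normal((C, C * 3 * 2)) + 1j * rng.standard_normal((C, C * 3 * 2))

for x in range(1, Nx - 1):
    for y in range(1, Ny - 1, R):  # each missing line within kernel bounds
        src = kspace[:, x-1:x+2, [y-1, y+1]].ravel()   # acquired neighbours
        kspace[:, x, y] = W @ src                      # Eq. 2 fills the gap

print(kspace.shape)                # (2, 8, 8), missing interior points now filled
```

Note the sources always sit on acquired lines, so filling a missing point never feeds back into later kernel positions.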
299 | 300 | Again, depending on whether you separated the weight estimation into R-1 subproblems or one big problem, you will need to either loop over 301 | all your subproblems to get the final reconstruction, or simply apply your weights to all missing points at once. 302 | 303 | ### Step 6 - Test Reconstructions 304 | Source file: `examples.m` 305 | 306 | To evaluate your reconstruction, I have provided a simple script and some data that set up a few toy under-sampling problems. 307 | If you've implemented `grappa.m` and everything else correctly, this should run and give you the expected outputs. 308 | 309 | ### Bonus Steps 310 | Using this exact framework, it is relatively simple to perform the following extensions: 311 | (I have code for these, they're all in some form or another on psg.fmrib.ox.ac.uk/u/mchiew/projects) 312 | 313 | * Regularised GRAPPA (Tikhonov, PRUNO/ESPIRiT-style SVD-truncation) 314 | * SENSE-style image-based reconstruction using "sensitivities" derived from Fourier Transforming the GRAPPA kernel 315 | * Analytical g-factor maps derived from the GRAPPA kernel 316 | * 2D GRAPPA (under-sampling in both directions, with and without CAIPI-style sampling) 317 | * Slice-GRAPPA and Split-Slice-GRAPPA multi-band slice separation 318 | -------------------------------------------------------------------------------- /data/data.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mchiew/grappa-tutorial/a38145383fb476e06191ccdc0e36279fa12d4d46/data/data.mat -------------------------------------------------------------------------------- /examples.m: -------------------------------------------------------------------------------- 1 | % examples.m 2 | % mchiew@fmrib.ox.ac.uk 3 | 4 | % Load example data 5 | input = matfile('data/data.mat'); 6 | truth = input.truth; 7 | calib = input.calib; 8 | 9 | % ============================================================================ 10 | The 
R=2 problem 11 | % ============================================================================ 12 | 13 | R = [1,2]; 14 | kernel = [3,4]; 15 | 16 | mask = false(32,96,96); 17 | mask(:,:,1:2:end) = true; 18 | 19 | data = truth.*mask; 20 | 21 | recon = grappa(data, calib, R, kernel); 22 | 23 | show_quad(data, recon, 'R=2'); 24 | 25 | 26 | % ============================================================================ 27 | % The R=3 problem 28 | % ============================================================================ 29 | 30 | R = [1,3]; 31 | kernel = [3,4]; 32 | 33 | mask = false(32,96,96); 34 | mask(:,:,1:3:end) = true; 35 | 36 | data = truth.*mask; 37 | 38 | recon = grappa(data, calib, R, kernel); 39 | 40 | show_quad(data, recon, 'R=3'); 41 | 42 | 43 | % ============================================================================ 44 | % The R=6 problem 45 | % ============================================================================ 46 | 47 | R = [1,6]; 48 | kernel = [3,2]; 49 | 50 | mask = false(32,96,96); 51 | mask(:,:,1:6:end) = true; 52 | 53 | data = truth.*mask; 54 | 55 | recon = grappa(data, calib, R, kernel); 56 | 57 | show_quad(data, recon, 'R=6'); 58 | 59 | 60 | % ============================================================================ 61 | % The noisy R=6 problem 62 | % ============================================================================ 63 | 64 | R = [1,6]; 65 | kernel = [3,2]; 66 | 67 | mask = false(32,96,96); 68 | mask(:,:,1:6:end) = true; 69 | 70 | noise = 1E-6*(randn(size(mask)) + 1j*randn(size(mask))); 71 | data = (truth + noise).*mask; 72 | 73 | recon = grappa(data, calib, R, kernel); 74 | 75 | show_quad(data, recon, 'R=6 with noise'); 76 | -------------------------------------------------------------------------------- /grappa-practical.html: -------------------------------------------------------------------------------- 1 |
Mark Chiew (mchiew@fmrib.ox.ac.uk)
4 | 5 |Parallel imaging, broadly speaking, refers to the use of multi-channel receive coil information to reconstruct under-sampled data. 8 | It is called "parallel" because the set of receive channels all acquire data "in parallel" - i.e., they all record at the same time. 9 | GRAPPA (Griswold et al., MRM 2002) is one of the most popular techniques for performing parallel imaging reconstruction, among many. 10 | Others include SENSE (Pruessmann et al., MRM 1999) and ESPIRiT (Uecker et al., MRM 2014),
11 | 12 |So, in this tutorial, we'll go over the basics of what GRAPPA does, and how to do it. 13 | What we won't cover is why GRAPPA, or parallel imaging works. I refer you to one of the many review papers on Parallel Imaging.
14 | 15 |First, this is an example of the type of problem we are trying to solve:
20 | 21 |Coil #1 k-space Coil #2 k-space
22 |
23 | oooooooooooooooo oooooooooooooooo
24 | ---------------- ----------------
25 | oooooooooooooooo oooooooooooooooo o : Acquired k-space data point
26 | ---------------- ---------------- - : Missing k-space data
27 | oooooooooooooooo oooooooooooooooo
28 | ---------------- ----------------
29 | oooooooooooooooo oooooooooooooooo
30 | ---------------- ----------------
31 | oooooooooooooooo oooooooooooooooo
32 | ---------------- ----------------
33 | oooooooooooooooo oooooooooooooooo
34 | ---------------- ----------------
35 |
36 |
37 | The acquisition above is missing every other line (R=2)
. To generate our output images, we will need to fill in these missing lines.
38 | GRAPPA gives us a set of steps to fill in these missing lines. We do this by using a weighted combination of surrounding points, from all coils.
Here's what this practical will help you with:
43 | 44 |Let's consider one of the missing points we want to reconstruct, from coil 1, denoted X
:
Coil #1 k-space Coil #2 k-space
57 |
58 | oooooooooooooooo oooooooooooooooo
59 | ---------------- ----------------
60 | oooooooooooooooo oooooooooooooooo o : Acquired k-space data point
61 | ---------------- ---------------- - : Missing k-space data
62 | oooooooooooooooo oooooooooooooooo
63 | --------X------- ---------------- X : Reconstruction Target
64 | oooooooooooooooo oooooooooooooooo
65 | ---------------- ----------------
66 | oooooooooooooooo oooooooooooooooo
67 | ---------------- ----------------
68 | oooooooooooooooo oooooooooooooooo
69 | ---------------- ----------------
70 |
71 |
72 | To recover the target point X
, we need to choose a local neighbourhood of surrounding acquired points, as well as the "parallel" neighbourhoods of the other coils.
Coil #1 k-space Coil #2 k-space
75 |
76 | oooooooooooooooo oooooooooooooooo
77 | ---------------- ----------------
78 | oooooooooooooooo oooooooooooooooo o : Acquired k-space data point
79 | ---------------- ---------------- - : Missing k-space data
80 | ooooooo***oooooo ooooooo***oooooo
81 | --------X------- --------Y------- X : Reconstruction target
82 | ooooooo***oooooo ooooooo***oooooo Y : Reconstruction target position in other coils
83 | ---------------- ----------------
84 | oooooooooooooooo oooooooooooooooo * : Neighbourhood sources
85 | ---------------- ----------------
86 | oooooooooooooooo oooooooooooooooo
87 | ---------------- ----------------
88 |
89 |
90 | We chose a neighbourhood, or "kernel" of 3 points in the x-direction, and 2 points in the y-direction, centred on the target position. 91 | These source points come from the same coil as the target point, as well as in the same locations from the other parallel coils. 92 | Zooming in on the target point, and its sources:
93 | 94 |Coil #1 k-space Coil #2 k-space
95 |
96 | * * * * * *
97 | X : Reconstruction target
98 | - X - - Y - Y : Reconstruction target position in other coils
99 |
100 | * * * * * * * : Neighbourhood sources
101 |
102 |
103 | We can see that there are 3x2=6 source points per coil. This is generally referred to as a 3x2 kernel geometry. 104 | If we label all the source points in this 2-coil example:
105 | 106 |Coil #1 k-space Coil #2 k-space
107 |
108 | a b c g h i
109 | X : Reconstruction target
110 | - X - - Y - Y : Reconstruction target position in other coils
111 |
112 | d e f j k l [a-l] : Source points
113 |
114 |
115 | The GRAPPA weighted combination formulation means that:
116 | 117 |X = wx_a*a + wx_b*b + wx_c*c + ... + wx_l*l; {Eq. 1a}
118 |
119 |
120 | where wx_n
refers to the weight for source n, and a-l
are the complex k-space data points.
121 | While the kernels are shift-invariant over the entire k-space, they are specific to the target coils.
Therefore, to reconstruct target point Y in the second coil, a different set of weights are required:
124 | 125 |Y = wy_a*a + wy_b*b + wy_c*c + ... + wy_l*l; {Eq. 1b}
126 |
127 |
128 | So, for example, if you have 8 coils, using 3x2 kernel geometries, you will have 8x6 different kernel 129 | weights, where the entire group of weights for all coils is called the kernel or weight set.
130 | 131 |If we write this as a matrix equation, we can see that:
132 | 133 |M = W*S; {Eq. 2a}
134 | or
135 | [X] [wx_a wx_b wx_c ... wx_l] [a]
136 | [Y] [wy_a wy_b wy_c ... wy_l] [b]
137 | . = [ . ] [c] {Eq. 2b}
138 | . [ . ] [d]
139 | [Z] [wz_a wz_b wz_c ... wz_l] [e]
140 | [.]
141 | [.]
142 | [.]
143 | [l]
144 |
145 |
146 | where M=(X,Y,...Z)'
are the missing points from each coil, W
is the kernel weight matrix where each coil's weights comprise one row.
147 | S
is a vector of source points - order is not important, but it is critical to ensure it is consistent with the weights.
Finally, this expression only holds for a fixed kernel geometry (i.e. source - target geometry). For R > 2
(i.e. R-1
missing lines for each measured line),
150 | there will be R-1
distinct kernel sets.
For example, in the case of R=3
, you have 2 distinct kernel geometries to work with:
Coil #1 k-space Coil #2 k-space
155 |
156 | oooooooooooooooo oooooooooooooooo
157 | ---------------- ----------------
158 | ---------------- ---------------- o : Acquired k-space data point
159 | oooooooooooooooo oooooooooooooooo - : Missing k-space data
160 | ---------------- ----------------
161 | ---------------- ---------------- --- ooo
162 | oooooooooooooooo oooooooooooooooo ooo First kernel --- Second kernel
163 | -------X-------- ---------------- -X- geometry -Y- geometry
164 | -------Y-------- ---------------- --- ooo
165 | oooooooooooooooo oooooooooooooooo ooo ---
166 | ---------------- ----------------
167 | ---------------- ----------------
168 |
169 |
170 | If you want to generalise this to R > 2
, ultimately, for C
coils, with kernel size [Nx,Ny]
, and acceleration factor R
, in totality,
171 | you will need to estimate C*(R-1)*C*Nx*Ny
weight coefficients. You can solve this as a single comprehensive system, or because the problems are uncoupled,
172 | you may find it easier to solve each (R-1)
class of sub-problems separately.
So Eq. 2 completely describes how you solve for missing points, given acquired data in some neighbourhood around it.
175 | 176 |The one final piece of information we need is fully sampled "calibration" or "training" data, so that we can actually find the kernel weights. 177 | To do this, we simply solve Eq. 2 for the weights, typically in a least-squares sense (no pun intended), over the calibration data. 178 | In this case, M and S are both matrices, containing known information, representing source-target relationships across the entire calibration region.
179 | 180 |This is what "fitting the kernel" refers to:
181 | 182 |W = M*pinv(S); {Eq. 3a}
183 | or
184 | W = M*S'*(S*S')^-1; {Eq. 3b}
185 |
186 |
187 | To ensure a robust and well-conditioned fit, typically a relatively large calibration region is used to fit <code>W</code>
.
188 | Over the calibration data, nearly every point is a potential source and target. Because all the points are present,
189 | we can use this data to learn the shift-invariant geometric relationships in the k-space x coil data.
Coil #1 k-space Coil #2 k-space
192 |
193 | oooooooooooooooo oooooooooooooooo
194 | oooooooooooooooo oooooooooooooooo
195 | oooooooooooooooo oooooooooooooooo o : Acquired k-space calibration data
196 | oooooooooooooooo oooooooooooooooo
197 | oooooooooooooooo oooooooooooooooo
198 | oooooooooooooooo oooooooooooooooo
199 | oooooooooooooooo oooooooooooooooo
200 | oooooooooooooooo oooooooooooooooo
201 | oooooooooooooooo oooooooooooooooo
202 | oooooooooooooooo oooooooooooooooo
203 | oooooooooooooooo oooooooooooooooo
204 | oooooooooooooooo oooooooooooooooo
205 |
206 |
207 | So to solve Eq. 3, we "move" the kernel over the entire calibration space, and for every source-target pairing, we get an additional
208 | column in M
and S
:
Coil #1 k-space Coil #2 k-space
211 |
212 | a1 b1 c1 - g1 h1 i1 - [X1] [wx_a wx_b wx_c ... wx_l] [a1]
213 | [Y1] [wy_a wy_b wy_c ... wy_l] [b1]
214 | - X1 - - - Y1 - - [ .] = [ . ] [c1]
215 | [ .] [ . ] [d1]
216 | d1 e1 f1 - j1 k1 l1 - [Z1] [wz_a wz_b wz_c ... wz_l] [e1]
217 | [ .]
218 | - - - - - - - - [ .]
219 | [l1]
220 |
221 | Coil #1 k-space Coil #2 k-space
222 |
223 | - a2 b2 c2 - g2 h2 i2 [X1 X2] [wx_a wx_b wx_c ... wx_l] [a1 a2]
224 | [Y1 Y2] [wy_a wy_b wy_c ... wy_l] [b1 b2]
225 | - - X2 - - - Y2 - [ . .] = [ . ] [c1 c2]
226 | [ . .] [ . ] [d1 d2]
227 | - d2 e2 f2 - j2 k2 l2 [Z1 Z2] [wz_a wz_b wz_c ... wz_l] [e1 e2]
228 | [ . .]
229 | - - - - - - - - [ . .]
230 | [l1 l2]
231 |
232 | Coil #1 k-space Coil #2 k-space
233 |
234 | - - - - - - - - [X1 X2 ... Xn] [wx_a wx_b wx_c ... wx_l] [a1 a2 ... an]
235 | [Y1 Y2 ... Yn] [wy_a wy_b wy_c ... wy_l] [b1 b2 ... bn]
236 | - an bn cn - gn hn in [ . . ... .] = [ . ] [c1 c2 ... cn]
237 | [ . . ... .] [ . ] [d1 d2 ... dn]
238 | - - Xn - - - Yn - [Z1 Z2 ... Zn] [wz_a wz_b wz_c ... wz_l] [e1 e2 ... en]
239 | [ . . ... .]
240 | - dn en fn - jn kn ln [ . . ... .]
241 | [l1 l2 ... ln]
242 |
243 |
244 | Now that you have M
and S
fully populated, you can solve Eq. 3 in the least squares sense, by pseudo-inverting S
:
[wx_a wx_b wx_c ... wx_l] [X1 X2 ... Xn] [a1 a2 ... an]
247 | [wy_a wy_b wy_c ... wy_l] [Y1 Y2 ... Yn] [b1 b2 ... bn]
248 | [ . ] = [ . . ... .] *pinv([c1 c2 ... cn]) Eq. {4}
249 | [ . ] [ . . ... .] [d1 d2 ... dn]
250 | [wz_a wz_b wz_c ... wz_l] [Z1 Z2 ... Zn] [e1 e2 ... en]
251 | [ . . ... .]
252 | [ . . ... .]
253 | [l1 l2 ... ln]
254 |
255 |
256 | Ultimately, the entire GRAPPA algorithm boils down to:
259 | 260 |W
over the calibration dataM
, using the calibrated W
, over the actual under-sampled dataNow that you have a basic sense of the internal logic behind GRAPPA (the what), we'll get into a step-by-step practical on 271 | how to actually go through the mechanics of writing a GRAPPA-based image reconstruction program (the how).
272 | 273 |I've included skeleton code for each step, that you're free to use if you like. I've also included a full working step-by-step solution if 274 | you get stuck on any step. Use as much or as little of the provided solution as you like.
275 | 276 |At the end of the tutorial, I'll just briefly walk through my solution code.
277 | 278 |Source file: grappa.m
We'll use grappa.m
as our main function that takes in undersampled data and returns reconstructed data.
283 | We'll also be defining separate functions for most of the other steps, and the provided grappa.m
file is already organised this way for you.
284 | I recommend you use the provided grappa.m
file.
Source files: grappa_get_pad.m
, grappa_pad_data.m
, grappa_unpad_data.m
We need to pad the k-space data that we're working with in order to accommodate kernels being applied at the boundary of the actual data. 291 | Because the kernels extend for some width beyond the target point, if the reconstruction target is at the edge, the kernel will necessarily 292 | need to grab data from beyond.
293 | 294 |This is organised into 2 parts:
295 | grappa_get_pad.m
should return you the size of padding needed in each dimension given your kernel size and under-sampling factor
296 | grappa_pad_data.m
should perform the padding operation
297 | grappa_unpad_data.m
should perform the un-padding operation
Source file: grappa_get_indices.m
This is in my opinion the trickiest part of the practical GRAPPA problem. In order to perform weight estimation and application, you 304 | will need to be able to know what the co-ordinates are of every source point relative to its target point. It's not difficult to picture 305 | in your head, but making sure you've got your indexing correct is pretty crucial.
306 | 307 |You will need to return an array of source indices, where each column is paired with the corresponding element in the target index vector. 308 | I use linear indexing for this (i.e., I index the 3D data arrays from 1 to CNxNy linearly, instead of using subscripts).
309 | 310 |I strongly recommend simply using the provided solution, unless you're feeling particularly keen. In my solution, for simplicity, I require that 311 | the kernel size be odd in the kx-direction, and even in the ky-direction.
312 | 313 |Source file: grappa_estimate_weights.m
Given paired set of source and target indices, you must now use those to collect corresponding dataset pairs from the calibration k-space 318 | in order to perform weight estimation. Don't forget that the weights must map data from all coils to the target points in each coil.
319 | 320 |You can do whatever you like here, if you know what you're doing. Otherwise, I recommend you perform a least squares fit. 321 | The easiest way to do that, is to recognise that this is simply a linear regression problem as laid out above, and use the pseudoinverse (pinv in MATLAB) 322 | Eq. {4} is basically what you should get.
323 | 324 |The way I have structured my solution is to estimate weights for each of the R-1 missing line groups (or kernel geometries) separately. 325 | This makes the organisation a bit simpler, and is mathematically equivalent to solving for all weight sets at once. Feel free to try to implement 326 | the all-in-one approach if you have extra time.
327 | 328 |Source file: grappa_apply_weights.m
Finally, the weights estimated need to be used to reconstruct missing data. Here you also need to identify source and target indices from the 333 | actual under-sampled data, and then use the weights you derive to solve the reconstruction problem (Eq. {2}).
334 | 335 |Again, depending on whether you separated the weight estimation into R-1 subproblems or one big problem, you will need to either loop over 336 | all your subproblems to get the final reconstruction, or simply apply your weights to all missing points at once.
337 | 338 |Source file: example.m
To evaluate your reconstruction, I have provided a simple script and some data that perform some toy under-sampling problems.
343 | If you've implemented grappa.m
and everything else correctly, this should run and give you the expected outputs.
Using this exact framework, it is relatively simple to perform the following extensions: 348 | (I have code for these, they're all in some form or another on psg.fmrib.ox.ac.uk/u/mchiew/projects)
349 | 350 |