├── .github
│   └── ISSUE_TEMPLATE
│       ├── bug_report.md
│       └── feature_request.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── JOSS_figs
│   ├── FIG1_3_copy.png
│   ├── Imagen3.png
│   ├── Imagen7.png
│   ├── SoS_ORC.png
│   ├── SoS_ORC_v2.png
│   ├── noise_sim.png
│   ├── noise_sim_epi.png
│   └── simfig.png
├── LICENSE
├── OCTOPUS
│   ├── Examples
│   │   ├── ORC_main.py
│   │   ├── display_panel_images.py
│   │   ├── examples_zip.zip
│   │   ├── numsim_cartesian.py
│   │   ├── numsim_epi.py
│   │   ├── numsim_spiral.py
│   │   ├── sample_data
│   │   │   ├── GRE_GS_dicom.IMA
│   │   │   ├── SS_sprial_ktraj.npy
│   │   │   ├── dcf.mat
│   │   │   ├── fieldmap_unwrapped.nii.gz
│   │   │   ├── ktraj.npy
│   │   │   ├── rawdata_b0map.mat
│   │   │   ├── rawdata_spiral.mat
│   │   │   └── slph_im.npy
│   │   └── settings.ini
│   ├── ORC.py
│   ├── __init__.py
│   ├── fieldmap
│   │   ├── __init__.py
│   │   ├── simulate.py
│   │   └── unwrap.py
│   ├── prog.py
│   ├── recon
│   │   ├── __init__.py
│   │   ├── imtransforms.py
│   │   └── rawdata_recon.py
│   └── utils
│       ├── __init__.py
│       ├── dataio.py
│       ├── log_kernel.mat
│       ├── metrics.py
│       └── plotting.py
├── OCTOPUSLogo.png
├── OCTOPUSLogo_Rect.png
├── README.md
├── doc
│   ├── Makefile
│   ├── make.bat
│   └── source
│       ├── OCTOPUS.fieldmap.rst
│       ├── OCTOPUS.recon.rst
│       ├── OCTOPUS.rst
│       ├── OCTOPUS.utils.rst
│       ├── conf.py
│       ├── index.rst
│       └── modules.rst
├── paper.bib
├── paper.md
├── requirements.txt
├── setup.py
└── tests
    ├── test_data
    │   ├── acrph_df.mat
    │   ├── acrph_df.npy
    │   ├── acrph_noncart_ksp.mat
    │   ├── ktraj_noncart.npy
    │   ├── ktraj_noncart_dcf.npy
    │   ├── sl_ph.mat
    │   ├── slph_cart_ksp.npy
    │   ├── slph_im.npy
    │   └── slph_noncart_ksp.npy
    ├── utests_ORC_methods.py
    └── utests_imtransform.py
/.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. 
Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 
21 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, gender identity and expression, level of experience, 9 | nationality, personal appearance, race, religion, or sexual identity and 10 | orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 
39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the team at imr.framework2018@gmail.com. All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 
67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at [http://contributor-covenant.org/version/1/4][version] 72 | 73 | [homepage]: http://contributor-covenant.org 74 | [version]: http://contributor-covenant.org/version/1/4/ -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to `OCTOPUS` 2 | :thumbsup: :tada: Thanks for taking the time to contribute! :thumbsup: :tada: 3 | 4 | Here are guidelines (not rules!) for contributing to `OCTOPUS`. Use your best judgment, and feel free to propose 5 | changes to this document in a pull request. 6 | 7 | ## Code of Conduct 8 | This project and everyone participating in it are governed by the 9 | [`OCTOPUS` Code of Conduct](https://github.com/imr-framework/OCTOPUS/blob/master/CODE_OF_CONDUCT.md). 10 | By participating, you are expected to uphold this code. Please report unacceptable behavior to 11 | [imr.framework2018@gmail.com](mailto:imr.framework2018@gmail.com). 12 | 13 | ## Pull requests 14 | Follow the coding conventions laid out in the [Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). Ensure source code is 15 | documented as per the NumPy convention [[numpy1]], [[numpy2]]. If you notice any `OCTOPUS` code not adhering to 16 | [PEP8](https://www.python.org/dev/peps/pep-0008/), submit a pull request or open an issue. 17 | 18 | ## Issues 19 | Please adhere to the appropriate templates when reporting bugs or requesting features. The templates are automatically 20 | presented via GitHub's 'New Issue' feature. 
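The NumPy docstring convention referenced in CONTRIBUTING.md can be illustrated with a short sketch; the function below is hypothetical and not taken from the `OCTOPUS` codebase:

```python
import math


def rmse(predicted, reference):
    """Compute the root-mean-square error between two sequences.

    Parameters
    ----------
    predicted : sequence of float
        Estimated values.
    reference : sequence of float
        Ground-truth values; must have the same length as ``predicted``.

    Returns
    -------
    float
        Root-mean-square error between ``predicted`` and ``reference``.
    """
    # Pair up corresponding entries and average the squared differences.
    squared_errors = [(p - r) ** 2 for p, r in zip(predicted, reference)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))


print(rmse([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2) ≈ 3.5355
```

The convention uses underlined section headers (`Parameters`, `Returns`) with one indented `name : type` entry per argument, which is what numpydoc and Sphinx's napoleon extension parse when building the API docs under `doc/`.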
21 | 22 | [email]: mailto:imr.framework2018@gmail.com 23 | [code_of_conduct]: https://github.com/imr-framework/OCTOPUS/blob/master/CODE_OF_CONDUCT.md 24 | [style_guide]: https://www.python.org/dev/peps/pep-0008/ 25 | [numpy1]: https://numpydoc.readthedocs.io/en/latest/format.html 26 | [numpy2]: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html -------------------------------------------------------------------------------- /JOSS_figs/FIG1_3_copy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/FIG1_3_copy.png -------------------------------------------------------------------------------- /JOSS_figs/Imagen3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/Imagen3.png -------------------------------------------------------------------------------- /JOSS_figs/Imagen7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/Imagen7.png -------------------------------------------------------------------------------- /JOSS_figs/SoS_ORC.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/SoS_ORC.png -------------------------------------------------------------------------------- /JOSS_figs/SoS_ORC_v2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/SoS_ORC_v2.png -------------------------------------------------------------------------------- 
/JOSS_figs/noise_sim.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/noise_sim.png -------------------------------------------------------------------------------- /JOSS_figs/noise_sim_epi.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/noise_sim_epi.png -------------------------------------------------------------------------------- /JOSS_figs/simfig.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/JOSS_figs/simfig.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 
21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. 
The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 
91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 
122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 
155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 
186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. 
This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 
256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 
287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 
317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. 
If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 
386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 
421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. 
If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 
486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 
512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. 
If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 
578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 
613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | <one line to give the program's name and a brief idea of what it does.> 635 | Copyright (C) <year> <name of author> 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see <https://www.gnu.org/licenses/>. 649 | 650 | Also add information on how to contact you by electronic and paper mail. 
651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | <program> Copyright (C) <year> <name of author> 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | <https://www.gnu.org/licenses/>. 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | <https://www.gnu.org/licenses/why-not-lgpl.html>. 
675 | -------------------------------------------------------------------------------- /OCTOPUS/Examples/ORC_main.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Main script for off-resonance correction 4 | Author: Marina Manso Jimeno 5 | Last modified: 07/18/2020 6 | ''' 7 | import configparser 8 | import scipy.io as sio 9 | import numpy as np 10 | import nibabel as nib 11 | import matplotlib.pyplot as plt 12 | import math 13 | import time 14 | import os 15 | 16 | from OCTOPUS.utils.dataio import get_data_from_file 17 | from OCTOPUS.recon.rawdata_recon import fmap_recon 18 | from OCTOPUS.recon.rawdata_recon import spiral_recon 19 | from OCTOPUS.utils.plotting import plot_correction_results 20 | 21 | import OCTOPUS.ORC as ORC 22 | 23 | # Read settings.ini configuration file 24 | path_settings = 'settings.ini' 25 | config = configparser.ConfigParser() 26 | config.read(path_settings) 27 | inputs = config['CORRECTION INPUTS'] 28 | outputs = config['CORRECTION OUTPUTS'] 29 | 30 | ## 31 | # Load the input data (raw data, k-space trajectory and field map) 32 | ## 33 | FOV = 384e-3 # meters 34 | dt = 10e-6 # seconds 35 | TE = 4.6e-3 # seconds 36 | 37 | rawdata = get_data_from_file(inputs['path_rawdatspiral_file']) 38 | ktraj = get_data_from_file(inputs['path_ktraj_file']) 39 | 40 | # Optional 41 | if inputs['path_dcf_file']: 42 | dcf = get_data_from_file(inputs['path_dcf_file']).flatten() # Density correction factor 43 | 44 | if inputs['path_fmapunwrapped_file']: 45 | fmap = np.fliplr(get_data_from_file(inputs['path_fmapunwrapped_file']) / (2 * math.pi)) 46 | else: 47 | fmap = fmap_recon(inputs['path_rawfmap_file'], 2.46e-3, method='HP', plot = 0, save = 0) 48 | 49 | if len(fmap.shape) == 2: 50 | fmap = np.expand_dims(fmap, axis=2) 51 | 52 | plt.imshow(np.rot90(fmap[:,:,0], -1), cmap='gray') 53 | plt.axis('off') 54 | plt.colorbar() 55 | 
plt.title('Field map') 56 | plt.show() 57 | 58 | ## 59 | # Dimensions check 60 | ## 61 | 62 | if fmap.shape[0] != fmap.shape[1]: 63 | raise ValueError('Image and field map should have square dimensions (NxN)') 64 | if rawdata.shape[0] != ktraj.shape[0] or rawdata.shape[1] != ktraj.shape[1]: 65 | raise ValueError('The raw data dimensions do not match the k-space trajectory') 66 | 67 | ## 68 | # Useful parameters 69 | ## 70 | N = fmap.shape[0] # Matrix size 71 | Npoints = rawdata.shape[0] 72 | Nshots = rawdata.shape[1] 73 | Nchannels = rawdata.shape[-1] 74 | if len(rawdata.shape) < 4: 75 | rawdata = rawdata.reshape(Npoints, Nshots, 1, Nchannels) 76 | Nslices = rawdata.shape[2] 77 | 78 | t_ro = Npoints * dt # read-out time, hard-coded for Siemens-Pulseq 79 | T = np.linspace(TE, TE + t_ro, Npoints).reshape((Npoints, 1)) 80 | seq_params = {'FOV': FOV, 'N': N, 'Npoints': Npoints, 'Nshots': Nshots, 't_vector': T, 't_readout': t_ro} 81 | #if 'dcf' in globals(): 82 | # seq_params.update({'dcf': dcf}) 83 | 84 | 85 | ## 86 | # Plot the original data 87 | ## 88 | original_im = spiral_recon(inputs['path_rawdatspiral_file'], ktraj, N, plot=1, save=0) 89 | 90 | ## 91 | # Off resonance correction 92 | ## 93 | if not os.path.isdir(outputs['path_correction_folder']): 94 | os.mkdir(outputs['path_correction_folder']) 95 | 96 | Lx = 2 # Frequency segments for fs-CPR and MFI. 
L = Lx * Lmin 97 | if Lx < 1: 98 | raise ValueError('The L factor cannot be lower than 1 (minimum L)') 99 | 100 | CPR_result = np.zeros((N, N, Nslices, Nchannels), dtype=complex) 101 | fsCPR_result = np.zeros((N, N, Nslices, Nchannels), dtype=complex) 102 | MFI_result = np.zeros((N, N, Nslices, Nchannels), dtype=complex) 103 | CPR_timing = 0 104 | fsCPR_timing = 0 105 | MFI_timing = 0 106 | for ch in range(Nchannels): 107 | for sl in range(Nslices): 108 | before = time.time() 109 | CPR_result[:, :, sl, ch] = ORC.CPR(np.squeeze(rawdata[:, :, sl, ch]), 'raw', ktraj, 110 | np.squeeze(fmap[:, :, sl]), 1, seq_params) 111 | CPR_timing += time.time() - before 112 | 113 | print('CPR: Done with slice:' + str(sl + 1) + ', channel:' + str(ch + 1)) 114 | np.save(outputs['path_correction_folder'] + 'CPR', CPR_result) 115 | 116 | before = time.time() 117 | fsCPR_result[:, :, sl, ch] = ORC.fs_CPR(np.squeeze(rawdata[:, :, sl, ch]),'raw', ktraj, np.squeeze(fmap[:, :, sl]), Lx, 1, seq_params) 118 | fsCPR_timing += time.time() - before 119 | 120 | print('fsCPR: Done with slice:' + str(sl + 1) + ', channel:' + str(ch + 1)) 121 | np.save(outputs['path_correction_folder'] + 'fsCPR_Lx' + str(Lx), fsCPR_result) 122 | 123 | before = time.time() 124 | MFI_result[:, :, sl, ch] = ORC.MFI(np.squeeze(rawdata[:, :, sl, ch]),'raw', ktraj, np.squeeze(fmap[:, :, sl]), Lx, 1,seq_params) 125 | MFI_timing += time.time() - before 126 | 127 | print('MFI: Done with slice:' + str(sl + 1) + ', channel:' + str(ch + 1)) 128 | np.save(outputs['path_correction_folder'] + 'MFI_Lx' + str(Lx), MFI_result) 129 | 130 | ## 131 | # Display the results 132 | ## 133 | print('\nCPR correction took ' + str(CPR_timing) + ' seconds.') 134 | print('\nFs-CPR correction took ' + str(fsCPR_timing) + ' seconds.') 135 | print('\nMFI correction took ' + str(MFI_timing) + ' seconds.') 136 | 137 | sos_CPR = np.sqrt(np.sum(np.abs(CPR_result)**2, -1)) 138 | sos_CPR = np.divide(sos_CPR, np.max(sos_CPR)) 139 | 140 | sos_fsCPR = 
np.sqrt(np.sum(np.abs(fsCPR_result)**2, -1)) 141 | sos_fsCPR = np.divide(sos_fsCPR, np.max(sos_fsCPR)) 142 | 143 | sos_MFI = np.sqrt(np.sum(np.abs(MFI_result)**2, -1)) 144 | sos_MFI = np.divide(sos_MFI, np.max(sos_MFI)) 145 | 146 | im_stack = np.stack((np.squeeze(np.rot90(original_im,-1)), np.squeeze(np.rot90(sos_CPR,-1)), np.squeeze(np.rot90(sos_fsCPR,-1)), np.squeeze(np.rot90(sos_MFI,-1)))) 147 | cols = ('Corrupted Image','CPR Correction', 'fs-CPR Correction', 'MFI Correction') 148 | row_names = (' ', ) 149 | plot_correction_results(im_stack, cols, row_names) 150 | 151 | 152 | 153 | -------------------------------------------------------------------------------- /OCTOPUS/Examples/display_panel_images.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Displays a panel with columns: Gold standard, uncorrected spiral, spiral CPR-corrected, spiral fs-CPR-corrected and spiral MFI-corrected 4 | Author: Marina Manso Jimeno 5 | Last updated: 07/08/2020 6 | ''' 7 | import numpy as np 8 | import configparser 9 | from OCTOPUS.utils.dataio import read_dicom 10 | from OCTOPUS.utils.plotting import plot_correction_results 11 | from OCTOPUS.utils.metrics import create_table 12 | 13 | 14 | # Read settings.ini configuration file 15 | path_settings = 'settings.ini' 16 | config = configparser.ConfigParser() 17 | config.read(path_settings) 18 | paths = list(config['DISPLAY'].items()) 19 | 20 | data_paths = [path[1] for path in paths if path[1] != ''] 21 | 22 | N = 192 23 | Nranges = len(data_paths) 24 | 25 | 26 | GS = [] 27 | uncorrected = [] 28 | CPR = [] 29 | fs_CPR = [] 30 | MFI = [] 31 | for path in data_paths: 32 | vol_GS = read_dicom(path + 'GRE.dcm') 33 | vol_uncorr = np.load(path + 'spiral_bc_im.npy') 34 | vol_CPR = np.load(path + 'corrections/CPR.npy') 35 | vol_fsCPR = np.load(path + 'corrections/fsCPR_Lx2.npy') 36 | vol_MFI = np.load(path + 
'corrections/MFI_Lx2.npy') 37 | 38 | 39 | sos_CPR = np.divide(np.sqrt(np.sum(np.abs(vol_CPR)**2, -1)), np.max(np.sqrt(np.sum(np.abs(vol_CPR)**2, -1)))) 40 | sos_fsCPR = np.divide(np.sqrt(np.sum(np.abs(vol_fsCPR)**2, -1)), np.max(np.sqrt(np.sum(np.abs(vol_fsCPR)**2, -1)))) 41 | sos_MFI = np.divide(np.sqrt(np.sum(np.abs(vol_MFI)**2, -1)), np.max(np.sqrt(np.sum(np.abs(vol_MFI)**2, -1)))) 42 | 43 | GS.append(np.rot90(vol_GS,0)) 44 | uncorrected.append(np.rot90(vol_uncorr,-1)) 45 | CPR.append(np.rot90(sos_CPR, -1)) 46 | fs_CPR.append(np.rot90(sos_fsCPR, -1)) 47 | MFI.append(np.rot90(sos_MFI,-1)) 48 | 49 | GS = np.fliplr(np.moveaxis(np.squeeze(np.stack(GS)), 0, -1)) 50 | uncorrected = np.fliplr(np.moveaxis(np.stack(np.squeeze(uncorrected)), 0, -1)) 51 | CPR = np.fliplr(np.moveaxis(np.stack(np.squeeze(CPR)), 0, -1)) 52 | fs_CPR = np.fliplr(np.moveaxis(np.stack(np.squeeze(fs_CPR)), 0, -1)) 53 | MFI = np.fliplr(np.moveaxis(np.stack(np.squeeze(MFI)), 0, -1)) 54 | 55 | cols = ('Gold Standard','Corrupted Image','CPR Correction', 'fs-CPR Correction', 'MFI Correction') 56 | #rows = ('Shimmed', 'Mid-shimmed', 'Not shimmed') 57 | rows = ('Shimmed', 'X shim = -90', 'X shim = 0') 58 | im_stack = np.stack((GS, uncorrected, CPR, fs_CPR, MFI)) 59 | 60 | plot_correction_results(im_stack, cols, rows) 61 | 62 | #cols = ('Gold Standard', 'Corrupted Image','CPR Correction', 'fs-CPR Correction', 'MFI Correction') 63 | #create_table(im_stack, cols, rows) 64 | -------------------------------------------------------------------------------- /OCTOPUS/Examples/examples_zip.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/examples_zip.zip -------------------------------------------------------------------------------- /OCTOPUS/Examples/numsim_cartesian.py: -------------------------------------------------------------------------------- 1 | # 
Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Numerical Simulation Experiments with Cartesian trajectories 4 | Author: Marina Manso Jimeno 5 | Last updated: 07/15/2020 6 | ''' 7 | import numpy as np 8 | import matplotlib.pyplot as plt 9 | import cv2 10 | 11 | import OCTOPUS.fieldmap.simulate as fieldmap_sim 12 | import OCTOPUS.ORC as ORC 13 | from OCTOPUS.utils.plotting import plot_correction_results 14 | from OCTOPUS.utils.metrics import create_table 15 | from OCTOPUS.recon.rawdata_recon import mask_by_threshold 16 | 17 | from skimage.data import shepp_logan_phantom 18 | from skimage.transform import resize 19 | from skimage.util import img_as_float, random_noise 20 | from skimage.segmentation import flood_fill 21 | ## 22 | # Original image: Shepp-Logan Phantom 23 | ## 24 | 25 | def numsim_cartesian(): 26 | 27 | N = 192 28 | ph = resize(shepp_logan_phantom(), (N,N)) 29 | 30 | 31 | ph = ph.astype(complex) 32 | plt.imshow(np.abs(ph), cmap='gray') 33 | plt.title('Original phantom') 34 | plt.axis('off') 35 | plt.colorbar() 36 | plt.show() 37 | 38 | brain_mask = mask_by_threshold(ph) 39 | 40 | # Floodfill from point (0, 0) 41 | ph_holes = ~(flood_fill(brain_mask, (0, 0), 1).astype(int)) + 2 42 | mask = brain_mask + ph_holes 43 | 44 | ## 45 | # Cartesian k-space trajectory 46 | ## 47 | dt = 10e-6 # grad raster time 48 | ktraj_cart = np.arange(0, N * dt, dt).reshape(1,N) 49 | ktraj_cart = np.tile(ktraj_cart, (N, 1)) 50 | plt.imshow(ktraj_cart, cmap='gray') 51 | plt.title('Cartesian trajectory') 52 | plt.show() 53 | 54 | ## 55 | # Simulated field map 56 | ## 57 | fmax_v = [1600, 3200, 4800] # Hz corresponding to 25, 50 and 75 ppm at 3T 58 | i = 0 59 | 60 | 61 | or_corrupted = np.zeros((N, N, len(fmax_v)), dtype='complex') 62 | or_corrected_CPR = np.zeros((N, N, len(fmax_v)), dtype='complex') 63 | or_corrected_fsCPR = np.zeros((N, N, len(fmax_v)), dtype='complex') 64 | or_corrected_MFI = np.zeros((N, N, len(fmax_v)), 
dtype='complex') 65 | for fmax in fmax_v: 66 | field_map = fieldmap_sim.realistic(np.abs(ph), mask, fmax) 67 | 68 | 69 | plt.imshow(field_map, cmap='gray') 70 | plt.title('Field Map') 71 | plt.colorbar() 72 | plt.axis('off') 73 | plt.show() 74 | 75 | ## 76 | # Corrupted images 77 | ## 78 | or_corrupted[:,:,i],_ = ORC.add_or_CPR(ph, ktraj_cart, field_map) 79 | corrupt = (np.abs(or_corrupted[:,:,i]) - np.abs(or_corrupted[...,i]).min())/(np.abs(or_corrupted[:,:,i]).max() - np.abs(or_corrupted[...,i]).min()) 80 | #plt.imshow(np.abs(or_corrupted[:,:,i]),cmap='gray') 81 | plt.imshow(corrupt, cmap='gray') 82 | plt.colorbar() 83 | plt.title('Corrupted Image') 84 | plt.axis('off') 85 | plt.show() 86 | 87 | ### 88 | # Corrected images 89 | ### 90 | or_corrected_CPR[:, :, i] = ORC.CPR(or_corrupted[:, :, i], 'im', ktraj_cart, field_map) 91 | or_corrected_fsCPR[:, :, i] = ORC.fs_CPR(or_corrupted[:, :, i], 'im', ktraj_cart, field_map, 2) 92 | or_corrected_MFI[:,:,i] = ORC.MFI(or_corrupted[:,:,i], 'im', ktraj_cart, field_map, 2) 93 | i += 1 94 | 95 | ## 96 | # Plot 97 | ## 98 | im_stack = np.stack((np.squeeze(or_corrupted), np.squeeze(or_corrected_CPR), np.squeeze(or_corrected_fsCPR), np.squeeze(or_corrected_MFI))) 99 | cols = ('Corrupted Image', 'CPR Correction', 'fs-CPR Correction', 'MFI Correction') 100 | row_names = ('-/+ 1600 Hz', '-/+ 3200 Hz', '-/+ 4800 Hz') 101 | plot_correction_results(im_stack, cols, row_names) 102 | 103 | # np.save('or_corrupted.npy', or_corrupted) 104 | # np.save('or_corrected_CPR.npy', or_corrected_CPR) 105 | # np.save('or_corrected_fsCPR.npy', or_corrected_fsCPR) 106 | # np.save('or_corrected_MFI.npy', or_corrected_MFI) 107 | ## 108 | # Metrics 109 | ## 110 | #im_stack = np.stack((np.dstack((ph, ph, ph)), or_corrupted, or_corrected_CPR, or_corrected_fsCPR, or_corrected_MFI)) 111 | #create_table(im_stack, cols, row_names) 112 | 113 | if __name__ == "__main__": 114 | numsim_cartesian() 
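The simulation above corrupts the phantom through the off-resonance signal model and then undoes it with conjugate-phase-based methods (CPR, fs-CPR, MFI). The core idea can be sketched in a few lines of plain numpy. This is an illustrative toy, not OCTOPUS's implementation: the 1D object, sample times, constant field map, and helper names `forward_model`/`cpr_recon` are all made-up assumptions for the sketch.

```python
import numpy as np

def forward_model(rho, k, t, fmap):
    # Off-resonance signal model for a 1D object rho[r]:
    # s[m] = sum_r rho[r] * exp(-1j*2*pi*(k[m]*r + fmap[r]*t[m]))
    r = np.arange(len(rho))
    E = np.exp(-1j * 2 * np.pi * (np.outer(k, r) + np.outer(t, fmap)))
    return E @ rho

def cpr_recon(s, k, t, fmap):
    # Conjugate phase reconstruction: demodulate every k-space sample
    # with the conjugate of the full (spatial encoding + off-resonance) phase
    r = np.arange(len(fmap))
    E = np.exp(-1j * 2 * np.pi * (np.outer(k, r) + np.outer(t, fmap)))
    return E.conj().T @ s / len(k)

N = 8
rho = np.arange(1, N + 1).astype(complex)  # toy 1D "image"
k = np.arange(N) / N                       # Cartesian k-space positions (cycles/sample)
t = np.arange(N) * 10e-6                   # acquisition time of each sample (dt = 10 us)
fmap = np.full(N, 300.0)                   # constant 300 Hz off-resonance

s = forward_model(rho, k, t, fmap)
uncorrected = cpr_recon(s, k, t, np.zeros(N))  # reconstruct ignoring the field map
corrected = cpr_recon(s, k, t, fmap)           # conjugate phase with the true field map
```

For a spatially constant field map the conjugate phase is exact, so `corrected` recovers `rho` to machine precision while `uncorrected` keeps a residual phase error; fs-CPR and MFI approximate this demodulation with a small number of frequency segments (the L factor) to keep it tractable for realistic, spatially varying field maps.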
-------------------------------------------------------------------------------- /OCTOPUS/Examples/numsim_epi.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Numerical Simulation Experiments with EPI trajectory 4 | Author: Marina Manso Jimeno 5 | Last updated: 01/19/2021 6 | ''' 7 | import numpy as np 8 | import matplotlib.pyplot as plt 9 | import cv2 10 | 11 | from skimage.data import shepp_logan_phantom 12 | from skimage.transform import resize 13 | from skimage.segmentation import flood_fill 14 | from skimage.util import img_as_float, random_noise 15 | from skimage.filters import gaussian 16 | 17 | import OCTOPUS.ORC as ORC 18 | import OCTOPUS.fieldmap.simulate as fieldmap_sim 19 | from OCTOPUS.recon.rawdata_recon import mask_by_threshold 20 | from OCTOPUS.utils.plotting import plot_correction_results 21 | 22 | 23 | def ssEPI_2d(N, FOV): 24 | ''' 25 | Generates a single-shot EPI trajectory 26 | 27 | Parameters 28 | ---------- 29 | N : int 30 | Matrix size 31 | FOV : float 32 | FOV in meters 33 | 34 | Returns 35 | ------- 36 | ktraj : numpy.ndarray 37 | k-space trajectory in 1/m 38 | ''' 39 | kmax2 = N / FOV 40 | kx = np.arange(int(-kmax2 / 2), int(kmax2 / 2) + 1, kmax2 / (N - 1)) 41 | ky = np.arange(int(-kmax2 / 2), int(kmax2 / 2) + 1, kmax2 / (N - 1)) 42 | ktraj = [] # np.zeros((N**2, 2)) 43 | for i, ky_i in enumerate(ky): 44 | for kx_i in kx: 45 | if i % 2 == 0: 46 | ktraj.append([kx_i, ky_i]) 47 | else: 48 | ktraj.append([-kx_i, ky_i]) 49 | ktraj = np.stack(ktraj) 50 | return ktraj 51 | 52 | def numsim_epi(): 53 | ## 54 | # Original image: Shepp-Logan Phantom 55 | ## 56 | N = 128 57 | FOV = 256e-3 58 | ph = resize(shepp_logan_phantom(), (N,N))#.astype(complex) 59 | plt.imshow(np.abs(ph), cmap='gray') 60 | plt.title('Original phantom') 61 | plt.axis('off') 62 | plt.colorbar() 63 | plt.show() 64 | 65 | brain_mask = 
mask_by_threshold(ph) 66 | 67 | # Floodfill from point (0, 0) 68 | ph_holes = ~(flood_fill(brain_mask, (0, 0), 1).astype(int)) + 2 69 | mask = brain_mask + ph_holes 70 | 71 | ## 72 | # EPI k-space trajectory 73 | ## 74 | dt = 4e-6 75 | ktraj = ssEPI_2d(N, FOV) # k-space trajectory 76 | 77 | plt.plot(ktraj[:,0], ktraj[:,1]) 78 | plt.title('EPI trajectory') 79 | plt.show() 80 | 81 | Ta = ktraj.shape[0] * dt 82 | T = (np.arange(ktraj.shape[0]) * dt).reshape(ktraj.shape[0], 1) 83 | seq_params = {'N': N, 'Npoints': ktraj.shape[0], 'Nshots': 1, 't_readout': Ta, 84 | 't_vector': T} 85 | 86 | ## 87 | # Simulated field map 88 | ## 89 | # fmax_v = [50, 75, 100] # Hz 90 | fmax_v = [100, 150, 200] 91 | 92 | i = 0 93 | or_corrupted = np.zeros((N, N, len(fmax_v)), dtype='complex') 94 | or_corrected_CPR = np.zeros((N, N, len(fmax_v)), dtype='complex') 95 | or_corrected_fsCPR = np.zeros((N, N, len(fmax_v)), dtype='complex') 96 | or_corrected_MFI = np.zeros((N, N, len(fmax_v)), dtype='complex') 97 | for fmax in fmax_v: 98 | # field_map = fieldmap_sim.realistic(np.abs(ph), mask, fmax) 99 | 100 | SL_smooth = gaussian(ph, sigma=3) 101 | field_map = cv2.normalize(SL_smooth, None, -fmax, fmax, cv2.NORM_MINMAX) 102 | field_map = np.round(field_map * mask) 103 | field_map[np.where(field_map == -0.0)] = 0 104 | 105 | plt.imshow(field_map, cmap='gray') 106 | plt.title('Field Map +/-' + str(fmax) + ' Hz') 107 | plt.colorbar() 108 | plt.axis('off') 109 | plt.show() 110 | 111 | ## 112 | # Corrupted images 113 | ## 114 | or_corrupted[:, :, i], EPI_ksp = ORC.add_or_CPR(ph, ktraj, field_map, 'EPI', seq_params) 115 | corrupt = (np.abs(or_corrupted[:, :, i]) - np.abs(or_corrupted[..., i]).min()) / ( 116 | np.abs(or_corrupted[:, :, i]).max() - np.abs(or_corrupted[..., i]).min()) 117 | # plt.imshow(np.abs(or_corrupted[:,:,i]),cmap='gray') 118 | plt.imshow(corrupt, cmap='gray') 119 | plt.colorbar() 120 | plt.title('Corrupted Image') 121 | plt.axis('off') 122 | plt.show() 123 | 124 | ### 125 | # 
Corrected images 126 | ### 127 | or_corrected_CPR[:, :, i] = ORC.correct_from_kdat('CPR', EPI_ksp, ktraj, field_map, seq_params, 128 | 'EPI') 129 | or_corrected_fsCPR[:,:,i] = ORC.correct_from_kdat('fsCPR', EPI_ksp, ktraj, field_map, seq_params, 'EPI') 130 | or_corrected_MFI[:, :, i] = ORC.correct_from_kdat('MFI', EPI_ksp, ktraj, field_map, 131 | seq_params, 'EPI') 132 | 133 | # or_corrected_CPR[:, :, i] = ORC.CPR(or_corrupted[:, :, i], 'im', ktraj, field_map, 'EPI', seq_params) 134 | # 135 | # or_corrected_fsCPR[:, :, i] = ORC.fs_CPR(or_corrupted[:, :, i], 'im', ktraj, field_map, 2, 'EPI', seq_params) 136 | # 137 | # or_corrected_MFI[:, :, i] = ORC.MFI(or_corrupted[:, :, i], 'im', ktraj, field_map, 2, 'EPI', seq_params) 138 | 139 | i += 1 140 | 141 | ## 142 | # Plot 143 | ## 144 | im_stack = np.stack((np.squeeze(or_corrupted), np.squeeze(or_corrected_CPR), np.squeeze(or_corrected_fsCPR), 145 | np.squeeze(or_corrected_MFI))) 146 | # np.save('im_stack.npy',im_stack) 147 | cols = ('Corrupted Image', 'CPR Correction', 'fs-CPR Correction', 'MFI Correction') 148 | row_names = ('-/+ 100 Hz', '-/+ 150 Hz', '-/+ 200 Hz') 149 | plot_correction_results(im_stack, cols, row_names) 150 | if __name__ == "__main__": 151 | numsim_epi() -------------------------------------------------------------------------------- /OCTOPUS/Examples/numsim_spiral.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Numerical Simulation Experiments with spiral trajectory 4 | Author: Marina Manso Jimeno 5 | Last updated: 07/15/2020 6 | ''' 7 | import numpy as np 8 | import math 9 | import cv2 10 | import matplotlib.pyplot as plt 11 | from skimage.data import shepp_logan_phantom 12 | from skimage.transform import resize 13 | from skimage.segmentation import flood_fill 14 | from skimage.filters import gaussian 15 | 16 | import OCTOPUS.ORC as ORC 17 | import 
OCTOPUS.fieldmap.simulate as fieldmap_sim 18 | from OCTOPUS.utils.plotting import plot_correction_results 19 | from OCTOPUS.utils.metrics import create_table 20 | from OCTOPUS.recon.rawdata_recon import mask_by_threshold 21 | ## 22 | # Original image: Shepp-Logan Phantom 23 | ## 24 | def numsim_spiral(): 25 | N = 128 # ph.shape[0] 26 | ph = resize(shepp_logan_phantom(), (N, N))#.astype(complex) 27 | plt.imshow(np.abs(ph), cmap='gray') 28 | plt.title('Original phantom') 29 | plt.axis('off') 30 | plt.colorbar() 31 | plt.show() 32 | 33 | brain_mask = mask_by_threshold(ph) 34 | 35 | # Floodfill from point (0, 0) 36 | ph_holes = ~(flood_fill(brain_mask, (0, 0), 1).astype(int)) + 2 37 | mask = brain_mask + ph_holes 38 | 39 | 40 | ## 41 | # Spiral k-space trajectory 42 | ## 43 | dt = 4e-6 44 | ktraj = np.load('sample_data/SS_sprial_ktraj.npy') # k-space trajectory 45 | 46 | plt.plot(ktraj.real,ktraj.imag) 47 | plt.title('Spiral trajectory') 48 | plt.show() 49 | 50 | #ktraj_dcf = np.load('test_data/ktraj_noncart_dcf.npy').flatten() # density compensation factor 51 | t_ro = ktraj.shape[0] * dt 52 | T = (np.arange(ktraj.shape[0]) * dt).reshape(ktraj.shape[0],1) 53 | 54 | seq_params = {'N': ph.shape[0], 'Npoints': ktraj.shape[0], 'Nshots': ktraj.shape[1], 't_readout': t_ro, 't_vector': T}#, 'dcf': ktraj_dcf} 55 | ## 56 | # Simulated field map 57 | ## 58 | fmax_v = [100, 150, 200] # Hz 59 | 60 | i = 0 61 | or_corrupted = np.zeros((N, N, len(fmax_v)), dtype='complex') 62 | or_corrected_CPR = np.zeros((N, N, len(fmax_v)), dtype='complex') 63 | or_corrected_fsCPR = np.zeros((N, N, len(fmax_v)), dtype='complex') 64 | or_corrected_MFI = np.zeros((N, N, len(fmax_v)), dtype='complex') 65 | for fmax in fmax_v: 66 | 67 | SL_smooth = gaussian(ph, sigma=3) 68 | field_map = cv2.normalize(SL_smooth, None, -fmax, fmax, cv2.NORM_MINMAX) 69 | field_map = np.round(field_map * mask) 70 | field_map[np.where(field_map == -0.0)] = 0 71 | ### 72 | plt.imshow(field_map, cmap='gray') 73 |
plt.title('Field Map +/-' + str(fmax) + ' Hz') 74 | plt.colorbar() 75 | plt.axis('off') 76 | plt.show() 77 | 78 | ## 79 | # Corrupted images 80 | ## 81 | or_corrupted[:,:,i], ksp_corrupted = ORC.add_or_CPR(ph, ktraj, field_map, 1, seq_params) 82 | corrupt = (np.abs(or_corrupted[:,:,i]) - np.abs(or_corrupted[...,i]).min())/(np.abs(or_corrupted[:,:,i]).max() - np.abs(or_corrupted[...,i]).min()) 83 | #plt.imshow(np.abs(or_corrupted[:,:,i]),cmap='gray') 84 | plt.imshow(corrupt, cmap='gray') 85 | plt.colorbar() 86 | plt.title('Corrupted Image') 87 | plt.axis('off') 88 | plt.show() 89 | 90 | ### 91 | # Corrected images 92 | ### 93 | or_corrected_CPR[:, :, i] = ORC.correct_from_kdat('CPR', ksp_corrupted, ktraj, field_map, seq_params, 94 | 1) 95 | or_corrected_fsCPR[:, :, i] = ORC.correct_from_kdat('fsCPR', ksp_corrupted, ktraj, field_map, seq_params, 1) 96 | or_corrected_MFI[:, :, i] = ORC.correct_from_kdat('MFI', ksp_corrupted, ktraj, field_map, 97 | seq_params, 1) 98 | # or_corrected_CPR[:, :, i] = ORC.CPR(or_corrupted[:, :, i], 'im', ktraj, field_map, 1, seq_params) 99 | # 100 | # or_corrected_fsCPR[:, :, i] = ORC.fs_CPR(or_corrupted[:, :, i], 'im', ktraj, field_map, 2, 1, seq_params) 101 | # 102 | # or_corrected_MFI[:,:,i] = ORC.MFI(or_corrupted[:,:,i], 'im', ktraj, field_map, 2, 1, seq_params) 103 | 104 | i += 1 105 | 106 | ## 107 | # Plot 108 | ## 109 | im_stack = np.stack((np.squeeze(or_corrupted), np.squeeze(or_corrected_CPR), np.squeeze(or_corrected_fsCPR), np.squeeze(or_corrected_MFI))) 110 | #np.save('im_stack.npy',im_stack) 111 | cols = ('Corrupted Image','CPR Correction', 'fs-CPR Correction', 'MFI Correction') 112 | row_names = ('-/+ 100 Hz', '-/+ 150 Hz', '-/+ 200 Hz') 113 | plot_correction_results(im_stack, cols, row_names) 114 | 115 | 116 | ## 117 | # Metrics 118 | ## 119 | #im_stack = np.stack((np.dstack((ph, ph, ph)), or_corrupted, or_corrected_CPR, or_corrected_fsCPR, or_corrected_MFI)) 120 | #create_table(im_stack, cols, row_names) 121 | 122 | if 
__name__ == "__main__": 123 | numsim_spiral() -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/GRE_GS_dicom.IMA: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/GRE_GS_dicom.IMA -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/SS_sprial_ktraj.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/SS_sprial_ktraj.npy -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/dcf.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/dcf.mat -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/fieldmap_unwrapped.nii.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/fieldmap_unwrapped.nii.gz -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/ktraj.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/ktraj.npy -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/rawdata_b0map.mat: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/rawdata_b0map.mat -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/rawdata_spiral.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/rawdata_spiral.mat -------------------------------------------------------------------------------- /OCTOPUS/Examples/sample_data/slph_im.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/Examples/sample_data/slph_im.npy -------------------------------------------------------------------------------- /OCTOPUS/Examples/settings.ini: -------------------------------------------------------------------------------- 1 | [CORRECTION INPUTS] 2 | path_inputs_folder = sample_data\ 3 | path_rawdatspiral_file = sample_data\rawdata_spiral.mat 4 | path_ktraj_file = sample_data\ktraj.npy 5 | path_dcf_file = sample_data\dcf.mat 6 | path_rawfmap_file = 7 | path_fmapunwrapped_file = sample_data\fieldmap_unwrapped.nii.gz 8 | 9 | [CORRECTION OUTPUTS] 10 | path_uncorrectedimage_folder = sample_data\ 11 | path_fielmaprecon_folder = sample_data\ 12 | path_correction_folder = sample_data\corrections\ 13 | 14 | [DISPLAY] 15 | path_display1 = sample_data\ 16 | path_display2 = sample_data\ 17 | path_display3 = sample_data\ 18 | -------------------------------------------------------------------------------- /OCTOPUS/ORC.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 
| Methods for ORC (Off-Resonance Correction) and off-resonance simulation 4 | \nAuthor: Marina Manso Jimeno 5 | \nLast updated: 01/20/2021 6 | ''' 7 | 8 | import numpy.fft as npfft 9 | import numpy as np 10 | 11 | from math import ceil, pi 12 | 13 | from OCTOPUS.recon.imtransforms import ksp2im, im2ksp, nufft_init 14 | 15 | 16 | def check_inputs_cartesian(dataInShape, dataInType, ktShape, dfShape): 17 | '''Check dimensions of the inputs 18 | 19 | Parameters 20 | ---------- 21 | dataInShape : tuple 22 | Shape of input data. Must be [NxN] for image data and [NlinesxNcolumns] for raw data. 23 | dataInType : str 24 | Type of data: im or raw 25 | ktShape : tuple 26 | Shape of k-space trajectory. Must be [NlinesxNcolumns]. 27 | dfShape : tuple 28 | Shape of field map. Must be [NxN] and match the image matrix size. 29 | ''' 30 | if dataInType == 'im': 31 | if dataInShape[0] != dataInShape[1]: 32 | raise ValueError('Data dimensions do not agree with expected image dimensions (NxN)') 33 | if dataInShape != dfShape: 34 | raise ValueError('The image and field map dimensions do not match') 35 | elif dataInType == 'raw': 36 | if dataInShape != ktShape: 37 | raise ValueError('The raw data and k-space trajectory dimensions do not match') 38 | else: 39 | raise ValueError('This type of input data is not supported. Please select im or raw') 40 | 41 | 42 | def check_inputs_noncartesian(dataInShape, dataInType, ktShape, dfShape, params): 43 | '''Check dimensions of the inputs 44 | 45 | Parameters 46 | ---------- 47 | dataInShape : tuple 48 | Shape of input data. Must be [NxN] for image data and [NpointsxNshots] for raw data. 49 | dataInType : str 50 | Type of data: im or raw 51 | ktShape : tuple 52 | Shape of k-space trajectory. Must be [NpointsxNshots]. 53 | dfShape : tuple 54 | Shape of field map. Must be [NxN] and match the image matrix size. 55 | params : dict 56 | Sequence parameters. Required elements are Npoints, Nshots, and N. 
57 | ''' 58 | if dataInType == 'im': 59 | if dataInShape[0] != dataInShape[1]: 60 | raise ValueError('Data dimensions do not agree with expected image dimensions (NxN)') 61 | if dataInShape != dfShape: 62 | raise ValueError('The image and field map dimensions do not match') 63 | 64 | elif dataInType == 'raw': 65 | if dataInShape != ktShape: 66 | raise ValueError('The raw data and k-space trajectory dimensions do not match') 67 | 68 | else: 69 | raise ValueError('This type of input data is not supported. Please select im or raw') 70 | 71 | if 'Npoints' not in params: 72 | raise ValueError('The number of acquisition points is missing') 73 | if 'Nshots' not in params: 74 | raise ValueError('The number of shots is missing') 75 | if 'N' not in params: 76 | raise ValueError('The matrix size N is missing') 77 | 78 | if params['N'] != dfShape[0]: 79 | raise ValueError('The image or field map dimensions and value specified for N do not match') 80 | 81 | if params['Npoints'] != ktShape[0]: 82 | raise ValueError('The raw data or trajectory dimensions and value specified for Npoints do not match') 83 | 84 | if params['Nshots'] != ktShape[1]: 85 | raise ValueError('The raw data or trajectory dimensions and value specified for Nshots do not match') 86 | 87 | 88 | 89 | 90 | def add_or(M, kt, df, nonCart = None, params = None): 91 | '''Forward model for off-resonance simulation 92 | 93 | Parameters 94 | ---------- 95 | M : numpy.ndarray 96 | Image data 97 | kt : numpy.ndarray 98 | k-space trajectory 99 | df : numpy.ndarray 100 | Field map 101 | nonCart : int , optional 102 | Cartesian/Non-Cartesian trajectory option. Default is None. 103 | params : dict , optional 104 | Sequence parameters. Default is None. 
105 | 106 | Returns 107 | ------- 108 | M_or : numpy.ndarray 109 | Off-resonance corrupted image data 110 | ''' 111 | 112 | '''Create a phase matrix - 2*pi*df*t for every df and every t''' 113 | if nonCart is not None: 114 | cartesian_opt = 0 115 | NufftObj = nufft_init(kt, params) 116 | 117 | else: 118 | cartesian_opt = 1 119 | NufftObj = None 120 | params = None 121 | 122 | kspace = im2ksp(M, cartesian_opt, NufftObj, params) 123 | M_or = np.zeros(M.shape, dtype=complex) 124 | 125 | for x in range(M.shape[0]): 126 | for y in range(M.shape[1]): 127 | phi = 2*pi*df[x,y]*kt 128 | kspace_orc = kspace*np.exp(-1j*phi) 129 | M_corr = ksp2im(kspace_orc, cartesian_opt, NufftObj, params) 130 | M_or[x,y] = M_corr[x,y] 131 | 132 | return M_or 133 | 134 | def add_or_CPR(M, kt, df, nonCart = None, params = None): 135 | '''Forward model for off-resonance simulation. The number of Fourier transforms equals the number of unique values in the field map. 136 | 137 | Parameters 138 | ---------- 139 | M : numpy.ndarray 140 | Image data 141 | kt : numpy.ndarray 142 | k-space trajectory 143 | df : numpy.ndarray 144 | Field map 145 | nonCart : int , optional 146 | Cartesian/Non-Cartesian trajectory option. Default is None. 147 | params : dict , optional 148 | Sequence parameters. Default is None.
149 | 150 | Returns 151 | ------- 152 | M_or : numpy.ndarray 153 | Off-resonance corrupted image data 154 | ''' 155 | # Create a phase matrix - 2*pi*df*t for every df and every t 156 | if nonCart is not None and nonCart == 1: 157 | cartesian_opt = 0 158 | NufftObj = nufft_init(kt, params) 159 | T = np.tile(params['t_vector'], (1, kt.shape[1])) 160 | 161 | elif nonCart == 'EPI': 162 | cartesian_opt = 1 163 | NufftObj = None 164 | T = np.flipud(params['t_vector']).reshape(params['N'], params['N']) 165 | T[1:params['N']:2,:] = np.fliplr(T[1:params['N']:2,:]) 166 | 167 | else: 168 | cartesian_opt = 1 169 | NufftObj = None 170 | params = None 171 | T = kt 172 | 173 | kspace = im2ksp(M, cartesian_opt, NufftObj, params) 174 | 175 | df_values = np.unique(df) 176 | 177 | M_or_CPR = np.zeros((M.shape[0], M.shape[1], len(df_values)), dtype=complex) 178 | kspsave = np.zeros((kspace.shape[0],kspace.shape[1],len(df_values)),dtype=complex) 179 | for i in range(len(df_values)): 180 | phi = - 2 * pi* df_values[i] * T 181 | kspace_or = kspace * np.exp(1j * phi) 182 | kspsave[:,:,i] = kspace_or 183 | M_corr = ksp2im(kspace_or, cartesian_opt, NufftObj, params) 184 | M_or_CPR[:, :, i] = M_corr 185 | 186 | M_or = np.zeros(M.shape, dtype=complex) 187 | for x in range(M.shape[0]): 188 | for y in range(M.shape[1]): 189 | fieldmap_val = df[x, y] 190 | idx = np.where(df_values == fieldmap_val) 191 | M_or[x, y] = M_or_CPR[x, y, idx] 192 | '''plt.imshow(abs(M_or)) 193 | plt.show()''' 194 | return M_or, kspsave 195 | 196 | def orc(M, kt, df): 197 | '''Off-resonance correction for Cartesian trajectories 198 | 199 | Parameters 200 | ---------- 201 | M : numpy.ndarray 202 | Cartesian k-space data 203 | kt : numpy.ndarray 204 | Cartesian k-space trajectory 205 | df : numpy.ndarray 206 | Field map 207 | 208 | Returns 209 | ------- 210 | M_hat : numpy.ndarray 211 | Off-resonance corrected image data 212 | ''' 213 | 214 | kspace = npfft.fftshift(npfft.fft2(M)) 215 | M_hat = np.zeros(M.shape, 
dtype=complex) 216 | for x in range(M.shape[0]): 217 | for y in range(M.shape[1]): 218 | phi = 2 * pi * df[x, y] * kt 219 | kspace_orc = kspace * np.exp(1j * phi) 220 | M_corr = npfft.ifft2(kspace_orc) 221 | M_hat[x, y] = M_corr[x, y] 222 | 223 | return M_hat 224 | 225 | def CPR(dataIn, dataInType, kt, df, nonCart=None, params=None): 226 | '''Off-resonance Correction by Conjugate Phase Reconstruction 227 | Maeda, A., Sano, K. and Yokoyama, T. (1988), Reconstruction by weighted correlation for MRI with time-varying gradients. IEEE Transactions on Medical Imaging, 7(1): 26-31. doi: 10.1109/42.3926 228 | 229 | Parameters 230 | ---------- 231 | dataIn : numpy.ndarray 232 | k-space raw data or image data 233 | dataInType : str 234 | Can be either 'raw' or 'im' 235 | kt : numpy.ndarray 236 | k-space trajectory. 237 | df : numpy.ndarray 238 | Field map 239 | nonCart : int 240 | Non-cartesian trajectory option. Default is None (Cartesian). 241 | params : dict 242 | Sequence parameters. Default is None (Cartesian). 243 | 244 | Returns 245 | ------- 246 | M_hat : numpy.ndarray 247 | Corrected image data. 
248 | ''' 249 | 250 | if nonCart is not None and nonCart == 1: 251 | check_inputs_noncartesian(dataIn.shape, dataInType, kt.shape, df.shape, params) 252 | cartesian_opt = 0 253 | NufftObj = nufft_init(kt, params) 254 | T = np.tile(params['t_vector'], (1, kt.shape[1])) 255 | N = params['N'] 256 | 257 | elif nonCart == 'EPI': 258 | # check_inputs_cartesian(dataIn.shape, dataInType, kt.shape, df.shape) 259 | cartesian_opt = 1 260 | NufftObj = None 261 | T = np.flipud(params['t_vector']).reshape(params['N'], params['N']) 262 | T[1:params['N']:2,:] = np.fliplr(T[1:params['N']:2,:]) 263 | 264 | N = dataIn.shape[0] 265 | 266 | else: 267 | check_inputs_cartesian(dataIn.shape, dataInType, kt.shape, df.shape) 268 | cartesian_opt = 1 269 | NufftObj = None 270 | T = kt 271 | N = dataIn.shape[0] 272 | 273 | if dataInType == 'im': 274 | rawData = im2ksp(dataIn, cartesian_opt, NufftObj, params) 275 | elif dataInType == 'raw': 276 | rawData = dataIn 277 | 278 | df_values = np.unique(df) 279 | M_CPR = np.zeros((N, N, len(df_values)), dtype=complex) 280 | for i in range(len(df_values)): 281 | phi = 2 * pi * df_values[i] * T 282 | kspace_orc = rawData * np.exp(1j * phi) 283 | M_corr = ksp2im(kspace_orc, cartesian_opt, NufftObj, params) 284 | M_CPR[:, :, i] = M_corr 285 | 286 | M_hat = np.zeros((N, N), dtype=complex) 287 | for x in range(df.shape[0]): 288 | for y in range(df.shape[1]): 289 | fieldmap_val = df[x,y] 290 | idx = np.where(df_values == fieldmap_val) 291 | M_hat[x,y] = M_CPR[x,y,idx] 292 | 293 | return M_hat 294 | 295 | def fs_CPR(dataIn, dataInType, kt, df, Lx, nonCart= None, params= None): 296 | '''Off-resonance Correction by frequency-segmented Conjugate Phase Reconstruction 297 | Noll, D. C., Pauly, J. M., Meyer, C. H., Nishimura, D. G. and Macovskj, A. (1992), Deblurring for non‐2D fourier transform magnetic resonance imaging. Magn. Reson. Med., 25: 319-333. 
doi:10.1002/mrm.1910250210 298 | 299 | Parameters 300 | ---------- 301 | dataIn : numpy.ndarray 302 | k-space raw data or image data 303 | dataInType : str 304 | Can be either 'raw' or 'im' 305 | kt : numpy.ndarray 306 | k-space trajectory 307 | df : numpy.ndarray 308 | Field map 309 | Lx : int 310 | L (frequency bins) factor 311 | nonCart : int 312 | Non-cartesian trajectory option. Default is None (Cartesian). 313 | params : dict 314 | Sequence parameters. Default is None (Cartesian). 315 | 316 | Returns 317 | ------- 318 | M_hat : numpy.ndarray 319 | Corrected image data. 320 | ''' 321 | if nonCart is not None and nonCart == 1: 322 | check_inputs_noncartesian(dataIn.shape, dataInType, kt.shape, df.shape, params) 323 | 324 | cartesian_opt = 0 325 | NufftObj = nufft_init(kt, params) 326 | T = np.tile(params['t_vector'], (1, kt.shape[1])) 327 | 328 | N = params['N'] 329 | t_ro = T[-1, 0] - T[0, 0] # T[end] - TE 330 | 331 | elif nonCart == 'EPI': 332 | # check_inputs_cartesian(dataIn.shape, dataInType, kt.shape, df.shape) 333 | cartesian_opt = 1 334 | NufftObj = None 335 | T = np.flipud(params['t_vector']).reshape(params['N'], params['N']) 336 | T[1:params['N']:2,:] = np.fliplr(T[1:params['N']:2,:]) 337 | N = dataIn.shape[0] 338 | t_ro = params['t_readout'] 339 | 340 | 341 | else: 342 | check_inputs_cartesian(dataIn.shape, dataInType, kt.shape, df.shape) 343 | cartesian_opt = 1 344 | NufftObj = None 345 | N = dataIn.shape[0] 346 | 347 | t_vector = kt[0].reshape(kt.shape[1], 1) 348 | T = kt 349 | t_ro = T[0, -1] - T[0, 0] 350 | 351 | if dataInType == 'im': 352 | rawData = im2ksp(dataIn, cartesian_opt, NufftObj, params) 353 | elif dataInType == 'raw': 354 | rawData = dataIn 355 | 356 | # Number of frequency segments 357 | df_max = max(np.abs([df.max(), df.min()])) # Hz 358 | L = ceil(4 * df_max * 2 * pi * t_ro / pi) 359 | L= L * Lx 360 | if len(np.unique(df)) == 1: 361 | L = 1 362 | f_L = np.linspace(df.min(), df.max(), L + 1) # Hz 363 | 364 | # T = 
np.tile(params['t_vector'], (1, kt.shape[1])) 365 | 366 | # reconstruct the L basic images 367 | M_fsCPR = np.zeros((N, N, L + 1),dtype=complex) 368 | for l in range(L+1): 369 | phi = 2 * pi * f_L[l] * T 370 | kspace_L = rawData * np.exp(1j * phi) 371 | M_fsCPR[:,:,l] = ksp2im(kspace_L, cartesian_opt, NufftObj, params) 372 | 373 | # final image reconstruction 374 | M_hat = np.zeros((N, N), dtype=complex) 375 | for i in range(M_hat.shape[0]): 376 | for j in range(M_hat.shape[1]): 377 | fieldmap_val = df[i, j] 378 | closest_fL_idx = find_nearest(f_L, fieldmap_val) 379 | 380 | if fieldmap_val == f_L[closest_fL_idx]: 381 | pixel_val = M_fsCPR[i, j, closest_fL_idx] 382 | else: 383 | if fieldmap_val < f_L[closest_fL_idx]: 384 | f_vals = [f_L[closest_fL_idx - 1], f_L[closest_fL_idx]] 385 | im_vals = [M_fsCPR[i, j, closest_fL_idx - 1], M_fsCPR[i, j, closest_fL_idx]] 386 | else: 387 | f_vals = [f_L[closest_fL_idx], f_L[closest_fL_idx + 1]] 388 | im_vals = [M_fsCPR[i, j, closest_fL_idx], M_fsCPR[i, j, closest_fL_idx + 1]] 389 | 390 | pixel_val = np.interp(fieldmap_val, f_vals, im_vals) 391 | 392 | M_hat[i, j] = pixel_val 393 | 394 | return M_hat 395 | 396 | def MFI(dataIn, dataInType, kt , df, Lx , nonCart= None, params= None): 397 | '''Off-resonance Correction by Multi-Frequency Interpolation 398 | Man, L., Pauly, J. M. and Macovski, A. (1997), Multifrequency interpolation for fast off‐resonance correction. Magn. Reson. Med., 37: 785-792. doi:10.1002/mrm.1910370523 399 | 400 | Parameters 401 | ---------- 402 | dataIn : numpy.ndarray 403 | k-space raw data or image data 404 | dataInType : str 405 | Can be either 'raw' or 'im' 406 | kt : numpy.ndarray 407 | k-space trajectory 408 | df : numpy.ndarray 409 | Field map 410 | Lx : int 411 | L (frequency bins) factor 412 | nonCart : int 413 | Non-cartesian trajectory option. Default is None (Cartesian). 414 | params : dict 415 | Sequence parameters. Default is None. 
416 | Returns 417 | ------- 418 | M_hat : numpy.ndarray 419 | Corrected image data. 420 | ''' 421 | 422 | if nonCart is not None and nonCart == 1: 423 | check_inputs_noncartesian(dataIn.shape, dataInType, kt.shape, df.shape, params) 424 | cartesian_opt = 0 425 | NufftObj = nufft_init(kt, params) 426 | t_vector = params['t_vector'] 427 | T = np.tile(params['t_vector'], (1, kt.shape[1])) 428 | t_ro = T[-1,0] - T[0,0] # T[end] - TE 429 | N = params['N'] 430 | 431 | elif nonCart == 'EPI': 432 | # check_inputs_cartesian(dataIn.shape, dataInType, kt.shape, df.shape) 433 | cartesian_opt = 1 434 | NufftObj = None 435 | T = np.flipud(params['t_vector']).reshape(params['N'], params['N']) 436 | T[1:params['N']:2,:] = np.fliplr(T[1:params['N']:2,:]) 437 | N = dataIn.shape[0] 438 | t_ro = params['t_readout'] 439 | t_vector = params['t_vector'] 440 | 441 | else: 442 | check_inputs_cartesian(dataIn.shape, dataInType, kt.shape, df.shape) 443 | cartesian_opt = 1 444 | NufftObj = None 445 | N = dataIn.shape[0] 446 | 447 | t_vector = kt[0].reshape(kt.shape[1],1) 448 | T = kt 449 | t_ro = T[0, -1] - T[0,0] 450 | 451 | if dataInType == 'im': 452 | rawData = im2ksp(dataIn, cartesian_opt, NufftObj, params) 453 | elif dataInType == 'raw': 454 | rawData = dataIn 455 | 456 | df = np.round(df, 1) 457 | idx, idy = np.where(df == -0.0) 458 | df[idx, idy] = 0.0 459 | 460 | # Number of frequency segments 461 | df_max = max(np.abs([df.max(), df.min()])) # Hz 462 | df_range = (df.min(), df.max()) 463 | L = ceil(df_max * 2 * pi * t_ro / pi) 464 | L = L * Lx 465 | if len(np.unique(df)) == 1: 466 | L = 1 467 | f_L = np.linspace(df.min(), df.max(), L + 1) # Hz 468 | 469 | #T = np.tile(params['t_vector'], (1, kt.shape[1])) 470 | 471 | # reconstruct the L basic images 472 | M_MFI = np.zeros((N, N, L + 1), dtype=complex) 473 | for l in range(L + 1): 474 | phi = 2 * pi * f_L[l] * T 475 | kspace_L = rawData * np.exp(1j * phi) 476 | M_MFI[:, :, l] = ksp2im(kspace_L, cartesian_opt, NufftObj, params) 477 | 
478 | # calculate MFI coefficients 479 | coeffs_LUT = coeffs_MFI_lsq(kt, f_L, df_range, t_vector) 480 | 481 | # final image reconstruction 482 | M_hat = np.zeros((N, N), dtype=complex) 483 | for i in range(M_hat.shape[0]): 484 | for j in range(M_hat.shape[1]): 485 | fieldmap_val = df[i,j] 486 | val_coeffs = coeffs_LUT[str(fieldmap_val)] 487 | M_hat[i, j] = sum(val_coeffs * M_MFI[i, j, :]) 488 | 489 | return M_hat 490 | 491 | def find_nearest(array, value): 492 | '''Finds the index of the value's closest array element 493 | 494 | Parameters 495 | ---------- 496 | array : numpy.ndarray 497 | Array of values 498 | value : float 499 | Value for which the closest element has to be found 500 | 501 | Returns 502 | ------- 503 | idx : int 504 | Index of the closest element of the array 505 | ''' 506 | array = np.asarray(array) 507 | diff = array - value 508 | if value >= 0: 509 | idx = np.abs(diff).argmin() 510 | else: 511 | idx = np.abs(diff[array < 1]).argmin() 512 | 513 | return idx 514 | 515 | def coeffs_MFI_lsq(kt, f_L, df_range, t_vector): 516 | '''Finds the coefficients for the multi-frequency interpolation (MFI) method by least-squares approximation. 517 | 518 | Parameters 519 | ---------- 520 | kt : numpy.ndarray 521 | K-space trajectory 522 | f_L : numpy.ndarray 523 | Frequency segments array. 524 | df_range : tuple 525 | Frequency range of the field map (minimum and maximum). 526 | t_vector : numpy.ndarray 527 | Sample time vector of the acquisition. 528 | 529 | Returns 530 | ------- 531 | cL : dict 532 | Coefficients look-up-table.
533 | ''' 534 | fs = 0.1 # Hz 535 | f_sampling = np.round(np.arange(df_range[0], df_range[1]+fs, fs), 1) 536 | 537 | alpha = 1.2 # 538 | t_limit = t_vector[-1] 539 | 540 | T = np.linspace(0, alpha * t_limit, len(t_vector)).reshape(-1, ) 541 | 542 | A = np.zeros((len(t_vector), f_L.shape[0]), dtype=complex) 543 | for l in range(f_L.shape[0]): 544 | phi = 2 * pi * f_L[l] * T 545 | A[:, l] = np.exp(1j * phi) 546 | 547 | cL={} 548 | for fs in f_sampling: 549 | b = np.exp(1j * 2 * pi * fs * T) 550 | C = np.linalg.lstsq(A, b, rcond=None) 551 | if fs == -0.0: 552 | fs = 0.0 553 | cL[str(fs)] = C[0][:].reshape((f_L.shape[0])) 554 | 555 | return cL 556 | 557 | def correct_from_kdat(method,kdat,ktraj,fmap,params,cart_opt): 558 | ''' 559 | Corrects numerical simulation data directly from k-space data to avoid artifacts from ksp-im and im-ksp conversion. 560 | 561 | Parameters 562 | ---------- 563 | method: str 564 | Correction method to apply: CPR, fsCPR or MFI 565 | kdat: numpy.ndarray 566 | Corrupted k-space data 567 | ktraj: numpy.ndarray 568 | k-space trajectory 569 | fmap: numpy.ndarray 570 | Field map in Hertz 571 | params: dict 572 | Sequence acquisition parameters 573 | cart_opt: 574 | Cartesian option: 1 (spiral), 'EPI' 575 | 576 | Returns 577 | ------- 578 | M_corr: numpy.ndarray 579 | Corrected image 580 | ''' 581 | df_values = np.unique(fmap) 582 | M_corr = np.zeros((params['N'], params['N']), dtype=complex) 583 | for i in range(len(df_values)): 584 | if method == 'CPR': 585 | M_corr_i = CPR(kdat[...,i], 'raw', ktraj, df_values[i] * np.ones(fmap.shape), cart_opt, params) 586 | elif method == 'fsCPR': 587 | M_corr_i = fs_CPR(kdat[..., i], 'raw', ktraj, df_values[i] * np.ones(fmap.shape), 1, cart_opt, params) 588 | elif method == 'MFI': 589 | M_corr_i = MFI(kdat[..., i], 'raw', ktraj, df_values[i] * np.ones(fmap.shape), 1, cart_opt, params) 590 | 591 | M_corr[np.where(fmap == df_values[i])] = M_corr_i[np.where(fmap == df_values[i])] 592 | return M_corr 
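The conjugate-phase idea that `add_or_CPR`, `CPR`, `fs_CPR`, and `MFI` above all build on can be sketched standalone: off-resonance multiplies each k-space sample by a phase `exp(-i*2*pi*df*t)` that grows with its acquisition time `t`, and demodulating with the conjugate phase undoes it. The sketch below uses plain NumPy for the Cartesian, uniform-field-map case only — it is an illustration of the principle, not the OCTOPUS API; the square phantom, matrix size, and 4 µs dwell time are arbitrary assumptions.

```python
import numpy as np

def corrupt_and_correct(img, df_hz, t_map):
    """Apply a uniform off-resonance phase to Cartesian k-space,
    then remove it by conjugate-phase demodulation (CPR principle)."""
    ksp = np.fft.fftshift(np.fft.fft2(img))
    phi = 2 * np.pi * df_hz * t_map              # phase accrued at each sample time
    ksp_corrupt = ksp * np.exp(-1j * phi)        # forward model: off-resonance corruption
    corrupted = np.fft.ifft2(np.fft.ifftshift(ksp_corrupt))
    ksp_corrected = ksp_corrupt * np.exp(1j * phi)  # conjugate phase cancels the corruption
    corrected = np.fft.ifft2(np.fft.ifftshift(ksp_corrected))
    return corrupted, corrected

N = 64
img = np.zeros((N, N))
img[16:48, 16:48] = 1.0                          # simple square phantom
t_map = (np.arange(N * N) * 4e-6).reshape(N, N)  # raster-ordered sample times, 4 us dwell
corrupted, corrected = corrupt_and_correct(img, 150.0, t_map)
```

With a spatially varying field map the conjugate phase differs per pixel, which is why `CPR` reconstructs one image per unique field-map value and picks each pixel from the matching reconstruction.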
-------------------------------------------------------------------------------- /OCTOPUS/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/__init__.py -------------------------------------------------------------------------------- /OCTOPUS/fieldmap/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/fieldmap/__init__.py -------------------------------------------------------------------------------- /OCTOPUS/fieldmap/simulate.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Methods to simulate different types of field maps 4 | \nAuthor: Marina Manso Jimeno 5 | \nLast modified: 07/16/2020 6 | ''' 7 | 8 | import numpy as np 9 | import cv2 10 | import math 11 | import matplotlib.pyplot as plt 12 | 13 | import OCTOPUS.ORC as ORC 14 | 15 | def fieldmap_bin(field_map, bin): 16 | ''' 17 | Bins a given field map given a binning value 18 | 19 | Parameters 20 | ---------- 21 | field_map : numpy.ndarray 22 | Field map matrix in Hz 23 | bin : int 24 | Binning value in Hz 25 | 26 | Returns 27 | ------- 28 | binned_field_map : numpy.ndarray 29 | Binned field map matrix 30 | ''' 31 | fmax = field_map.max() 32 | bins = np.arange(-fmax, fmax + bin, bin) 33 | binned_field_map = np.zeros(field_map.shape) 34 | for x in range(field_map.shape[0]): 35 | for y in range(field_map.shape[1]): 36 | idx = ORC.find_nearest(bins, field_map[x, y]) 37 | binned_field_map[x, y] = bins[idx] 38 | 39 | return binned_field_map 40 | 41 | def parabola_formula(N): 42 | """ 43 | Parabola values to fit an image of N rows/columns 44 | 45 | Parameters 46 | ---------- 47 | N : int 48 | 
Matrix size 49 | 50 | Returns 51 | ------- 52 | yaxis : numpy.ndarray 53 | y axis values of the parabola 54 | """ 55 | x1, y1 = -N / 10, 0.5 56 | x2, y2 = 0, 0 57 | x3, y3 = N / 10, 0.5 58 | 59 | denom = (x1 - x2) * (x1 - x3) * (x2 - x3) 60 | A = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom 61 | B = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / denom 62 | C = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2 + x1 * x2 * (x1 - x2) * y3) / denom 63 | 64 | y = lambda x: A * x ** 2 + B * x + C 65 | yaxis = y(np.arange(-N / 2, N / 2, 1)).reshape(1, N) 66 | 67 | return yaxis 68 | 69 | def parabolic(N, fmax, bin_opt = True, bin_val = 5): 70 | """ 71 | Creates a parabolic field map 72 | 73 | Parameters 74 | ---------- 75 | N : int 76 | Field map dimensions (NxN) 77 | fmax : float 78 | Frequency range in Hz 79 | bin_opt : bool 80 | Binning option. Default is True 81 | bin_val : int 82 | Binning value. Default is 5 Hz 83 | 84 | Returns 85 | ------- 86 | field map : numpy.ndarray 87 | Field map matrix with dimensions [NxN] and scaled from -fmax to +fmax Hz 88 | """ 89 | y = parabola_formula(N) 90 | rows = np.tile(y, (N,1)) 91 | field_map_mat = rows + rows.T 92 | dst = np.zeros(field_map_mat.shape) 93 | field_map = cv2.normalize(field_map_mat, dst, -fmax, fmax, cv2.NORM_MINMAX) 94 | if bin_opt: 95 | field_map = fieldmap_bin(field_map, bin_val) 96 | 97 | return field_map 98 | 99 | def hyperbolic(N, fmax, bin_opt = True, bin_val= 5): 100 | """ 101 | Creates a hyperbolic field map 102 | 103 | Parameters 104 | ---------- 105 | N : int 106 | Field map dimensions (NxN) 107 | fmax : float 108 | Frequency range in Hz 109 | bin_opt : bool 110 | Binning option. 
Default is True 111 | bin_val : int 112 | Binning value, default is 5 Hz 113 | 114 | 115 | Returns 116 | ------- 117 | field map : numpy.ndarray 118 | Field map matrix with dimensions [NxN] and scaled from -fmax to +fmax Hz 119 | """ 120 | y = parabola_formula(N) 121 | rows = np.tile(y, (N, 1)) 122 | field_map_mat = rows - rows.T 123 | dst = np.zeros(field_map_mat.shape) 124 | field_map = cv2.normalize(field_map_mat, dst, -fmax, fmax, cv2.NORM_MINMAX) 125 | if bin_opt: 126 | field_map = fieldmap_bin(field_map, bin_val) 127 | return field_map 128 | 129 | def realistic(im , mask, fmax , bin_opt = True, bin_val = 5): 130 | """ 131 | Creates a realistic field map based on the input image 132 | 133 | Parameters 134 | ---------- 135 | im : numpy.ndarray 136 | Input image 137 | fmax : float 138 | Frequency range in Hz 139 | bin_opt : bool 140 | Binning option. Default is True 141 | bin_val : int 142 | Binning value, default is 5 Hz 143 | 144 | Returns 145 | ------- 146 | field_map : numpy.ndarray 147 | Field map matrix with dimensions same as im and scaled from -fmax to +fmax Hz 148 | """ 149 | if im.shape[0] != im.shape[1]: 150 | raise ValueError('Images have to be squared (NxN)') 151 | 152 | N = im.shape[0] 153 | hist = np.histogram(im) 154 | #mask1= cv2.threshold(im, hist[1][hist[0].argmax()+1], 1, cv2.THRESH_BINARY) 155 | #mask2 = cv2.threshold(im, hist[1][hist[0].argmax()], 1, cv2.THRESH_BINARY_INV) 156 | #mask = mask1[1] #+ mask2[1] 157 | mask = mask 158 | 159 | np.random.seed(123) 160 | M = np.random.rand(2, 2) 161 | M2 = cv2.resize(M, (N, N)) 162 | 163 | dst = np.zeros(M2.shape) 164 | field_map = cv2.normalize(M2, dst, -fmax, fmax, cv2.NORM_MINMAX) * mask 165 | 166 | if bin_opt: 167 | field_map = fieldmap_bin(field_map, bin_val) 168 | 169 | return field_map 170 | 171 | def spherical_order4(N, fmax, bin_opt = True, bin_val= 5): 172 | """ 173 | Creates a field map simulating spherical harmonics of 4th order 174 | 175 | Parameters 176 | ---------- 177 | N : int 178 | 
Field map dimensions (NxN) 179 | fmax : float 180 | Frequency range in Hz 181 | bin_opt : bool 182 | Binning option. Default is True 183 | bin_val : int 184 | Binning value, default is 5 Hz 185 | 186 | Returns 187 | ------- 188 | field_map : numpy.ndarray 189 | Field map matrix with dimensions NxN and scaled from -fmax to +fmax Hz 190 | """ 191 | offset = 1 #math.sqrt(1 / (4 * math.pi)) 192 | x = np.tile(np.linspace(-offset, offset, N).reshape(N, 1), (1, N)) 193 | l1_X = np.dstack([x] * N) 194 | #plot_views(l1_X, 'X') 195 | 196 | l1_Z = np.ones((N, N, N)) * np.linspace(-offset, offset, N).reshape(1, 1, N) 197 | #plot_views(l1_Z, 'Z') 198 | 199 | y = np.flipud(x.T) 200 | l1_Y = np.dstack([y] * N) 201 | #plot_views(l1_Y, 'Y') 202 | 203 | middle_slice = int(round(N / 2)) - 1 204 | l4_X4 = 105 * (l1_X ** 2 - l1_Y ** 2) ** 2 - 420 * l1_X ** 2 * l1_Y **2 205 | #plot_views(l4_X4, 'X4') 206 | field_map_mat = np.squeeze(l4_X4[:,:,middle_slice]) 207 | 208 | dst = np.zeros(field_map_mat.shape) 209 | field_map = cv2.normalize(field_map_mat, dst, -fmax, fmax, cv2.NORM_MINMAX) 210 | if bin_opt: 211 | field_map = fieldmap_bin(field_map, bin_val) 212 | return field_map 213 | 214 | 215 | '''def plot_views(matrix, fig_title): 216 | 217 | middle_slice = int(round(matrix.shape[0] / 2)) - 1 218 | fig, axes = plt.subplots(nrows=1, ncols=3) 219 | fig.suptitle(fig_title, fontsize=16) 220 | # plt.subplot(131) 221 | axes[0].imshow(np.rot90(matrix[:, :, middle_slice]), vmin=matrix.min(), vmax=matrix.max()) 222 | axes[0].set_title('Axial (XY)') 223 | # plt.subplot(132) 224 | axes[1].imshow(np.rot90(matrix[:, middle_slice, :]), vmin=matrix.min(), vmax=matrix.max()) 225 | axes[1].set_title('Coronal (XZ)') 226 | # plt.subplot(133) 227 | axes[2].imshow(np.rot90(matrix[middle_slice, :, :]), vmin=matrix.min(), vmax=matrix.max()) 228 | axes[2].set_title('Sagittal (YZ)') 229 | 230 | # plt.colorbar() 231 | plt.show()''' -------------------------------------------------------------------------------- 
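All of the field-map simulators above funnel through `fieldmap_bin`. As a quick orientation, here is a standalone NumPy sketch of what that binning step does. This is an approximation, not the OCTOPUS API: `bin_fieldmap` is a hypothetical name, and rounding to multiples of the bin width matches the explicit bin grid built by `fieldmap_bin` only when `fmax` is a multiple of the bin width.

```python
import numpy as np

def bin_fieldmap(field_map, bin_width):
    # Snap every off-resonance value (Hz) to the nearest multiple of
    # bin_width. fieldmap_bin builds an explicit grid of bin centers from
    # -fmax to +fmax and picks the nearest one per voxel; when fmax is a
    # multiple of bin_width the two procedures agree.
    return bin_width * np.round(field_map / bin_width)

fm = np.array([[-12.4, 3.1],
               [7.6, 0.0]])
print(bin_fieldmap(fm, 5))  # -> -10, 5, 10, 0 (each snapped to a multiple of 5 Hz)
```

`fieldmap_bin` itself uses `ORC.find_nearest` against the bin grid, which also covers the case where `fmax` is not a multiple of the bin width.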
/OCTOPUS/fieldmap/unwrap.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Prepares field map data (from a Siemens scanner) for phase unwrapping with FSL 4 | \nAuthor: Marina Manso Jimeno 5 | \nLast updated: 07/08/2020 6 | ''' 7 | import os 8 | import scipy.io as sio 9 | import numpy as np 10 | import nibabel as nib 11 | import matplotlib.pyplot as plt 12 | 13 | from pydicom import dcmread 14 | 15 | from OCTOPUS.utils.dataio import get_data_from_file, read_dicom 16 | from OCTOPUS.recon.rawdata_recon import mask_by_threshold 17 | 18 | def fsl_prep(data_path_raw, data_path_dicom, dst_folder, dTE): 19 | ''' 20 | Prepares a Siemens fieldmap for phase unwrapping using FSL. Saves a phase difference image and a magnitude image with 21 | the ROI extracted in niftii format to use as inputs for FSL. 22 | 23 | Parameters 24 | ---------- 25 | data_path_raw : str 26 | Path to the MATLAB/npy file containing the field map raw data with dimensions [lines, columns, echoes, channels] 27 | data_path_dicom : str 28 | Path to the DICOM file containing the phase difference image (.IMA) 29 | dst_folder : str 30 | Path to the destination folder where the niftii files are saved 31 | dTE : float 32 | Difference in TE between the two echoes in seconds 33 | ''' 34 | b0_map = get_data_from_file(data_path_raw) 35 | '''if data_path_raw[-3:] == 'mat': 36 | b0_map = sio.loadmat(data_path_raw)['b0_map'] # field map raw data 37 | elif data_path_raw[-3:] == 'npy': 38 | b0_map = np.load(data_path_raw) 39 | else: 40 | raise ValueError('File format not supported, please input a .mat or .npy file')''' 41 | 42 | 43 | ## 44 | # Acq parameters 45 | ## 46 | N = b0_map.shape[1] # Matrix Size 47 | Nchannels = b0_map.shape[-1] 48 | 49 | if len(b0_map.shape) < 5: 50 | Nslices = 1 51 | b0_map = b0_map.reshape(b0_map.shape[0], N, 1, 2, Nchannels) 52 | else: 53 | 54 | Nslices = 
b0_map.shape[-3] 55 | sl_order = np.zeros((Nslices)) 56 | sl_order[range(0,Nslices,2)] = range(int(Nslices/2), Nslices) 57 | sl_order[range(1,Nslices,2)] = range(0, int(Nslices/2)) 58 | sl_order = sl_order.astype(int) 59 | 60 | ## 61 | # FT to get the echo complex images 62 | ## 63 | 64 | echo1 = np.zeros((N * 2, N, Nslices, Nchannels), dtype=complex) 65 | echo2 = np.zeros((N * 2, N, Nslices, Nchannels), dtype=complex) 66 | for ch in range(Nchannels): 67 | for sl in range(Nslices): 68 | echo1[:, :, sl, ch] = np.fft.fftshift(np.fft.ifft2(b0_map[:, :, sl, 0, ch])) 69 | echo2[:, :, sl, ch] = np.fft.fftshift(np.fft.ifft2(b0_map[:, :, sl, 1, ch])) 70 | 71 | # Crop the lines from oversampling factor of 2 72 | oversamp_factor = int(b0_map.shape[0] / 4) 73 | echo1 = echo1[oversamp_factor:-oversamp_factor, :, :, :] 74 | echo2 = echo2[oversamp_factor:-oversamp_factor, :, :, :] 75 | 76 | echo1 = echo1[:,:,sl_order,:] 77 | # Magnitude image with object 'brain' extracted 78 | mag_im = np.sqrt(np.sum(np.abs(echo1) ** 2, -1)) 79 | mag_im = np.divide(mag_im, np.max(mag_im)) 80 | 81 | 82 | '''im = np.zeros((128, 128 * 20)) 83 | for i in range(20): 84 | im[:, int(128 * i):int(128 * (i + 1))] = mag_im[:, :, i] 85 | plt.imshow(im) 86 | plt.show()''' 87 | 88 | 89 | 90 | brain_mask = mask_by_threshold(mag_im) 91 | brain_extracted = np.squeeze(mag_im) * brain_mask 92 | 93 | # for i in range(Nslices): 94 | # plt.imshow(brain_extracted[:,:,i]) 95 | # plt.show() 96 | 97 | img = nib.Nifti1Image(brain_extracted, np.eye(4)) 98 | nib.save(img, os.path.join(dst_folder, 'mag_vol_extracted.nii.gz')) 99 | 100 | # Phase difference image from DICOM data 101 | 102 | if os.path.isdir(data_path_dicom) or data_path_dicom[-4:] == '.dcm' or data_path_dicom[-4:] == '.IMA': 103 | vol = read_dicom(data_path_dicom) 104 | vol = np.rot90(vol[...,sl_order], -1) 105 | else: 106 | vol = get_data_from_file(data_path_dicom) 107 | # Save as niftii 108 | 109 | # for i in range(Nslices): 110 | # plt.imshow(vol[:,:,i]) 
111 | # plt.show() 112 | 113 | for i in range(Nslices): 114 | plt.subplot(1,2,1) 115 | plt.imshow(brain_extracted[...,i]) 116 | plt.subplot(1,2,2) 117 | plt.imshow(vol[:,:,i]) 118 | plt.show() 119 | img = nib.Nifti1Image(vol, np.eye(4)) 120 | nib.save(img, os.path.join(dst_folder, 'phase_diff.nii.gz')) 121 | -------------------------------------------------------------------------------- /OCTOPUS/prog.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import sys 3 | import os 4 | 5 | import numpy as np 6 | from tqdm import tqdm 7 | from pathlib import Path 8 | import math 9 | 10 | path_search = str(Path(__file__).parent.parent) 11 | sys.path.insert(0, path_search) 12 | 13 | from OCTOPUS.utils.dataio import get_data_from_file 14 | import OCTOPUS.ORC as ORC 15 | 16 | def main(): 17 | 18 | parser = argparse.ArgumentParser() 19 | parser.add_argument("path", help="Path containing the input data") 20 | parser.add_argument("datain", help="Corrupted data with dimensions [Npoints OR N(Cartesian), Nshots OR N(Cartesian), Nslices, Nchannels]") 21 | parser.add_argument("ktraj", help="K-space trajectory: [Npoints OR N(Cartesian), Nshots OR N(Cartesian)]") 22 | parser.add_argument("fmap", help="Field map in rad/s") 23 | parser.add_argument("ORCmethod", choices=['CPR', 'fsCPR', 'MFI'], help="Correction method") 24 | parser.add_argument("--cart", default=0, choices=[0, 1], dest="cart_opt", help="0: non-cartesian data, 1: cartesian data") 25 | parser.add_argument('--Lx', default=2, dest='Lx', help="L(frequency bins) factor with respect to minimum L") 26 | parser.add_argument('--grad_raster', default=10e-6, dest='grad_raster', help="Gradient raster time") 27 | parser.add_argument('--TE', default=0, dest='TE', help="Echo time") 28 | parser.add_argument('--dcf', default=None, dest='dcf', help="Density compensation factor for non-cartesian trajectories") 29 | 30 | 31 | args = parser.parse_args() 32 | data_in = 
get_data_from_file(os.path.join(args.path, args.datain)) 33 | ktraj = get_data_from_file(os.path.join(args.path, args.ktraj)) 34 | fmap = get_data_from_file(os.path.join(args.path, args.fmap))/ (2 * math.pi) 35 | 36 | ## Dimensions check 37 | print('Checking dimensions...') 38 | if data_in.shape[0] != ktraj.shape[0] or data_in.shape[1] != ktraj.shape[1]: 39 | raise ValueError('The raw data does not agree with the k-space trajectory') 40 | if fmap.shape[0] != fmap.shape[1]: 41 | raise ValueError('Image and field map should have square dimensions (NxN)') 42 | if len(fmap.shape) > 2: 43 | if fmap.shape[-1] != data_in.shape[2]: 44 | raise ValueError('The field map dimensions do not agree with the raw data') 45 | if len(fmap.shape) == 2: 46 | fmap = np.expand_dims(fmap,-1) 47 | if data_in.shape[2] != 1: 48 | raise ValueError('The field map dimensions do not agree with the raw data') 49 | 50 | print('OK') 51 | 52 | ## Sequence parameters 53 | N = fmap.shape[0] 54 | Npoints = data_in.shape[0] 55 | Nshots = data_in.shape[1] 56 | Nslices = data_in.shape[-2] 57 | Nchannels = data_in.shape[-1] 58 | 59 | t_ro = Npoints * args.grad_raster 60 | T = np.linspace(args.TE, args.TE + t_ro, Npoints).reshape((Npoints, 1)) 61 | if args.cart_opt == 0: 62 | seq_params = {'N': N, 'Npoints': Npoints, 'Nshots': Nshots, 't_vector': T, 't_readout': t_ro} 63 | 64 | if args.dcf is not None: 65 | dcf = get_data_from_file(os.path.join(args.path, args.dcf)).flatten() 66 | seq_params.update({'dcf': dcf}) 67 | 68 | ORC_result = np.zeros((N, N, Nslices, Nchannels), dtype=complex) 69 | 70 | 71 | if args.ORCmethod == 'MFI': 72 | for ch in tqdm(range(Nchannels)): 73 | for sl in range(Nslices): 74 | if args.cart_opt == 0: 75 | ORC_result[:, :, sl, ch] = ORC.MFI(np.squeeze(data_in[:, :, sl, ch]), 'raw', ktraj, np.squeeze(fmap[:, :, sl]), args.Lx, 1, seq_params) 76 | elif args.cart_opt == 1: 77 | ORC_result[:, :, sl, ch] = ORC.MFI(np.squeeze(data_in[:, :, sl, ch]), 'raw', ktraj, 78 | np.squeeze(fmap[:, :, 
sl]), args.Lx) 79 | 80 | elif args.ORCmethod == 'fsCPR': 81 | for ch in tqdm(range(Nchannels)): 82 | for sl in range(Nslices): 83 | if args.cart_opt == 0: 84 | ORC_result[:, :, sl, ch] = ORC.fs_CPR(np.squeeze(data_in[:, :, sl, ch]), 'raw', ktraj, 85 | np.squeeze(fmap[:, :, sl]), args.Lx, 1, seq_params) 86 | elif args.cart_opt == 1: 87 | ORC_result[:, :, sl, ch] = ORC.fs_CPR(np.squeeze(data_in[:, :, sl, ch]), 'raw', ktraj, 88 | np.squeeze(fmap[:, :, sl]), args.Lx) 89 | elif args.ORCmethod == 'CPR': 90 | for ch in tqdm(range(Nchannels)): 91 | for sl in range(Nslices): 92 | if args.cart_opt == 0: 93 | ORC_result[:, :, sl, ch] = ORC.CPR(np.squeeze(data_in[:, :, sl, ch]), 'raw', ktraj, 94 | np.squeeze(fmap[:, :, sl]), 1, seq_params) 95 | elif args.cart_opt == 1: 96 | ORC_result[:, :, sl, ch] = ORC.CPR(np.squeeze(data_in[:, :, sl, ch]), 'raw', ktraj, 97 | np.squeeze(fmap[:, :, sl])) 98 | 99 | 100 | np.save(os.path.join(args.path, 'ORC_result_' + args.ORCmethod), ORC_result) 101 | 102 | if __name__ == '__main__': 103 | main() 104 | -------------------------------------------------------------------------------- /OCTOPUS/recon/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/recon/__init__.py -------------------------------------------------------------------------------- /OCTOPUS/recon/imtransforms.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Methods to do k-space to image and image to k-space transforms for both Cartesian and non-Cartesian data 4 | \nAuthor: Marina Manso Jimeno 5 | \nLast updated: 10/07/2020 6 | ''' 7 | 8 | import numpy.fft as npfft 9 | import numpy as np 10 | import pynufft 11 | 12 | from math import pi 13 | 14 | def im2ksp(M, cartesian_opt, NufftObj=None, params=None): 15 | '''Image to k-space
transformation 16 | 17 | Parameters 18 | ---------- 19 | M : numpy.ndarray 20 | Image data 21 | cartesian_opt : int 22 | Cartesian = 1, Non-Cartesian = 0. 23 | NufftObj : pynufft.linalg.nufft_cpu.NUFFT_cpu 24 | Non-uniform FFT Object for non-cartesian transformation. Default is None. 25 | params : dict 26 | Sequence parameters. Default is None. 27 | 28 | Returns 29 | ------- 30 | kspace : numpy.ndarray 31 | k-space data 32 | ''' 33 | 34 | if cartesian_opt == 1: 35 | 36 | kspace = npfft.fftshift(npfft.fft2(M)) 37 | elif cartesian_opt == 0: 38 | # Sample phantom along ktraj 39 | if 'Npoints' not in params: 40 | raise ValueError('The number of acquisition points is missing') 41 | if 'Nshots' not in params: 42 | raise ValueError('The number of shots is missing') 43 | kspace = NufftObj.forward(M).reshape((params['Npoints'], params['Nshots'])) # sampled image 44 | else: 45 | raise ValueError('Cartesian option should be either 0 or 1') 46 | return kspace 47 | 48 | def ksp2im(ksp, cartesian_opt, NufftObj=None, params=None): 49 | '''K-space to image transformation 50 | 51 | Parameters 52 | ---------- 53 | ksp : numpy.ndarray 54 | K-space data 55 | cartesian_opt : int 56 | Cartesian = 1, Non-Cartesian = 0. 57 | NufftObj : pynufft.linalg.nufft_cpu.NUFFT_cpu 58 | Non-uniform FFT Object for non-cartesian transformation. Default is None. 59 | params : dict 60 | Sequence parameters. Default is None. 
61 | 62 | Returns 63 | ------- 64 | im : numpy.ndarray 65 | Image data 66 | ''' 67 | 68 | if cartesian_opt == 1: 69 | im = npfft.ifft2(npfft.fftshift(ksp)) 70 | #im = npfft.ifftshift(npfft.ifft2(ksp)) 71 | 72 | elif cartesian_opt == 0: 73 | 74 | if 'Npoints' not in params: 75 | raise ValueError('The number of acquisition points is missing') 76 | if 'Nshots' not in params: 77 | raise ValueError('The number of shots is missing') 78 | 79 | if 'dcf' in params: 80 | ksp_dcf = ksp.reshape((params['Npoints']*params['Nshots'],))*params['dcf'] 81 | im = NufftObj.adjoint(ksp_dcf) # * np.prod(sqrt(4 * params['N'] ** 2)) 82 | else: 83 | 84 | ksp_dcf = ksp.reshape((params['Npoints'] * params['Nshots'],)) 85 | im = NufftObj.solve(ksp_dcf, solver='cg', maxiter=50) 86 | #im = NufftObj.solve(ksp_dcf, solver='L1TVOLS', maxiter=50, rho=0.1) 87 | 88 | 89 | 90 | 91 | 92 | 93 | else: 94 | raise ValueError('Cartesian option should be either 0 or 1') 95 | 96 | return im 97 | 98 | def nufft_init(kt, params): 99 | '''Initializes the Non-uniform FFT object 100 | 101 | Parameters 102 | ---------- 103 | kt : numpy.ndarray 104 | K-space trajectory 105 | params : dict 106 | Sequence parameters. 
107 | 108 | Returns 109 | ------- 110 | NufftObj : pynufft.linalg.nufft_cpu.NUFFT_cpu 111 | Non-uniform FFT Object for non-cartesian transformation 112 | ''' 113 | if 'Npoints' not in params: 114 | raise ValueError('The number of acquisition points is missing') 115 | if 'Nshots' not in params: 116 | raise ValueError('The number of shots is missing') 117 | if 'N' not in params: 118 | raise ValueError('The matrix size is missing') 119 | 120 | kt_sc = pi / abs(np.max(kt)) 121 | kt = kt * kt_sc # pyNUFFT scaling [-pi, pi] 122 | om = np.zeros((params['Npoints'] * params['Nshots'], 2)) 123 | 124 | if kt.dtype == 'complex128': 125 | om[:, 0] = np.real(kt).flatten() 126 | om[:, 1] = np.imag(kt).flatten() 127 | else: 128 | om[:, 0] = kt[..., 0].flatten() 129 | om[:, 1] = kt[..., 1].flatten() 130 | 131 | # om = np.zeros((params['Npoints'] * params['Nshots'], 2)) 132 | # om[:, 0] = np.real(kt).flatten() 133 | # om[:, 1] = np.imag(kt).flatten() 134 | 135 | NufftObj = pynufft.NUFFT_cpu() # Create a pynufft object 136 | Nd = (params['N'], params['N']) # image size 137 | Kd = (2 * params['N'], 2 * params['N']) # k-space size 138 | Jd = (6, 6) # interpolation size 139 | 140 | NufftObj.plan(om, Nd, Kd, Jd) 141 | return NufftObj -------------------------------------------------------------------------------- /OCTOPUS/recon/rawdata_recon.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Methods to reconstruct a field map and spiral images from raw data. 
4 | \nAuthor: Marina Manso Jimeno 5 | \nLast modified: 07/16/2020 6 | ''' 7 | import scipy.io as sio 8 | import numpy as np 9 | import matplotlib.pyplot as plt 10 | import numpy.fft as fft 11 | import math 12 | 13 | from OCTOPUS.utils.dataio import get_data_from_file 14 | from OCTOPUS.recon.imtransforms import nufft_init, ksp2im 15 | 16 | def mask_by_threshold(im): 17 | ''' 18 | Masks a magnitude image by thresholding according to: 19 | Jenkinson M. (2003). Fast, automated, N-dimensional phase-unwrapping algorithm. Magnetic resonance in medicine, 49(1), 193–197. https://doi.org/10.1002/mrm.10354 20 | 21 | Parameters 22 | ---------- 23 | im : np.ndarray 24 | Magnitude image 25 | 26 | Returns 27 | ------- 28 | mask : np.ndarray 29 | Binary image for background segmentation 30 | ''' 31 | im = np.squeeze(im) 32 | mask = np.ones(im.shape) 33 | threshold = 0.1 * (np.percentile(im, 98) - np.percentile(im, 2)) + np.percentile(im, 2) 34 | mask[np.where(im < threshold)] = 0 35 | return mask 36 | 37 | def hermitian_product(echo1, echo2, dTE): 38 | ''' 39 | Calculates the phase difference and frequency map given two echo data and deltaTE between them. 40 | Channels are combined using the hermitian product as described by: 41 | Robinson, S., & Jovicich, J. (2011). B0 mapping with multi-channel RF coils at high field. Magnetic resonance in medicine, 66(4), 976–988. https://doi.org/10.1002/mrm.22879 42 | 43 | Parameters 44 | ---------- 45 | echo1 : np.ndarray 46 | Complex image corresponding to the first echo with dimensions [N, N, Nslices, Nchannels] 47 | echo2 : np.ndarray 48 | Complex image corresponding to the second echo with same dimensions as echo1 49 | dTE : float 50 | TE difference between the two echos in seconds. 
51 | 52 | Returns 53 | ------- 54 | fmap : np.ndarray 55 | Frequency map in Hz with dimensions [N, N, Nslices] 56 | ''' 57 | 58 | delta_theta = np.angle(np.sum(echo2 * np.conjugate(echo1), axis=-1)) 59 | fmap = - delta_theta / (dTE * 2 * math.pi) 60 | 61 | return fmap 62 | 63 | def separate_channels(echo1, echo2, dTE): 64 | ''' 65 | Calculates the phase difference and frequency map given two echo data and deltaTE between them. 66 | Channels are combined using a trimmed average method inspired by the separate channels method described by: 67 | Robinson, S., & Jovicich, J. (2011). B0 mapping with multi-channel RF coils at high field. Magnetic resonance in medicine, 66(4), 976–988. https://doi.org/10.1002/mrm.22879 68 | 69 | Parameters 70 | ---------- 71 | echo1 : np.ndarray 72 | Complex image corresponding to the first echo with dimensions [N, N, Nslices, Nchannels] 73 | echo2 : np.ndarray 74 | Complex image corresponding to the second echo with same dimensions as echo1 75 | dTE : float 76 | TE difference between the two echos in seconds. 
77 | 78 | Returns 79 | ------- 80 | fmap_chcomb : np.ndarray 81 | Frequency map in Hz with dimensions [N, N, Nslices] 82 | ''' 83 | echo1_ph = np.angle(echo1) 84 | echo2_ph = np.angle(echo2) 85 | 86 | delta_theta = echo2_ph - echo1_ph 87 | fmap = - delta_theta / (2 * math.pi * dTE) 88 | fmap_chcomb = np.zeros(fmap.shape[:-1]) 89 | for sl in range(fmap.shape[-2]): 90 | for i in range(fmap.shape[0]): 91 | for j in range(fmap.shape[1]): 92 | voxel_vals = fmap[i,j,sl,:] 93 | lowest_quartile = np.quantile(voxel_vals, 0.25) 94 | highest_quartile = np.quantile(voxel_vals, 0.75) 95 | trimmed_vals = [val for val in voxel_vals if val > lowest_quartile and val < highest_quartile] 96 | fmap_chcomb[i,j,sl] = np.mean(np.asarray(trimmed_vals)) 97 | 98 | return fmap_chcomb 99 | 100 | def fmap_recon(data, dTE, method = 'HP', save = 0, plot = 0, dst_folder = None): 101 | ''' 102 | Frequency map reconstruction from dual echo raw data 103 | 104 | Parameters 105 | ---------- 106 | data : str or np.ndarray 107 | Path containing the field map file or array containing the field map data 108 | dTE : float 109 | Difference in TE between the two echoes in seconds 110 | method : str 111 | Method for channel combination. Options are 'HP' or 'SC'. Default is 'HP'. 112 | plot : bool, Optional 113 | Plotting a slice of the reconstructed frequency map option. Default is 0 (not plot). 114 | save : bool, Optional 115 | Saving the data in a .npy file option. Default is 0 (not save). 116 | dst_folder : str, Optional 117 | Path to the folder where the reconstructed field map is saved. Default is None. 
118 | 119 | Returns 120 | ------- 121 | fmap : np.ndarray 122 | Frequency map in Hz 123 | ''' 124 | ## 125 | # Load the raw data 126 | ## 127 | if isinstance(data, str): 128 | f_map = get_data_from_file(data) 129 | elif isinstance(data, np.ndarray): 130 | f_map = data 131 | 132 | ## 133 | # Acq parameters 134 | ## 135 | N = f_map.shape[1] # Matrix Size 136 | Nchannels = f_map.shape[-1] 137 | 138 | if len(f_map.shape) < 5: 139 | Nslices = 1 140 | f_map = f_map.reshape(f_map.shape[0], N, 1, 2, Nchannels) 141 | else: 142 | 143 | Nslices = f_map.shape[2] 144 | 145 | ## 146 | # FT to get the echo complex images 147 | ## 148 | 149 | echo1 = np.zeros((N * 2, N, Nslices, Nchannels), dtype=complex) 150 | echo2 = np.zeros((N * 2, N, Nslices, Nchannels), dtype=complex) 151 | for ch in range(Nchannels): 152 | for sl in range(Nslices): 153 | echo1[:, :, sl, ch] = fft.fftshift(fft.ifft2(f_map[:, :, sl, 0, ch])) 154 | echo2[:, :, sl, ch] = fft.fftshift(fft.ifft2(f_map[:, :, sl, 1, ch])) 155 | 156 | # Crop the lines from oversampling factor of 2 157 | oversamp_factor = int(f_map.shape[0] / 4) 158 | echo1 = echo1[oversamp_factor:-oversamp_factor, :, :, :] 159 | echo2 = echo2[oversamp_factor:-oversamp_factor, :, :, :] 160 | 161 | ## 162 | # Calculate the field map 163 | ## 164 | if method == 'HP': 165 | fmap = hermitian_product(echo1, echo2, dTE) 166 | elif method == 'SC': 167 | fmap = separate_channels(echo1, echo2, dTE) 168 | else: 169 | raise ValueError('The method you specified is not supported') 170 | 171 | fmap = np.fliplr(fmap) 172 | if save: 173 | if dst_folder is None: 174 | raise ValueError('Please specify destination folder') 175 | np.save(dst_folder + 'fmap', fmap) 176 | 177 | if plot: 178 | mid = math.floor(fmap.shape[-1]/2) 179 | plt.imshow(np.rot90(fmap[:, :, mid], -1), cmap='gray') 180 | #plt.imshow(np.rot90(np.fliplr(b0map[:,:,0]), -1), cmap='gray') # rotate only for display purposes 181 | plt.axis('off') 182 | plt.title('Field Map') 183 | plt.colorbar() 184 | 
plt.show() 185 | 186 | return fmap 187 | 188 | def spiral_recon(data, ktraj, N, plot = 0, save = 0, dst_folder = None): 189 | ''' 190 | Spiral image reconstruction from raw data 191 | 192 | Parameters 193 | ---------- 194 | data : str or np.ndarray 195 | Path containing the raw data file of array containing the raw data 196 | ktraj : np.ndarray 197 | k-space trajectory coordinates with dimensions [Npoints, Nshots] 198 | N : int 199 | Matrix size of the reconstructed image 200 | plot : bool, Optional 201 | Plotting a slice of the reconstructed image option. Default is 0 (not plot). 202 | save : bool, Optional 203 | Saving the data in a .npy file option. Default is 0 (not save). 204 | dst_folder : str, Optional 205 | Path to the folder where the reconstructed image is saved. Default is None. 206 | ''' 207 | 208 | ## 209 | # Load the raw data 210 | ## 211 | if isinstance(data, str): 212 | dat = get_data_from_file(data) 213 | elif isinstance(data, np.ndarray): 214 | dat = data 215 | ## 216 | # Acq parameters 217 | ## 218 | Npoints = ktraj.shape[0] 219 | Nshots = ktraj.shape[1] 220 | Nchannels = dat.shape[-1] 221 | 222 | if len(dat.shape) < 4: 223 | Nslices = 1 224 | dat = dat.reshape(Npoints, Nshots, 1, Nchannels) 225 | else: 226 | Nslices = dat.shape[-2] 227 | 228 | 229 | if dat.shape[0] != ktraj.shape[0] or dat.shape[1] != ktraj.shape[1]: 230 | raise ValueError('Raw data and k-space trajectory do not match!') 231 | 232 | 233 | ## 234 | # Recon 235 | ## 236 | NUFFT_object = nufft_init(ktraj, {'N': N, 'Npoints': Npoints, 'Nshots': Nshots}) 237 | im = np.zeros((N, N, Nslices, Nchannels), dtype=complex) 238 | for ch in range(Nchannels): 239 | for sl in range(Nslices): 240 | im[:,:,sl,ch] = ksp2im(dat[:,:,sl,ch], 0, NUFFT_object, {'N': N, 'Npoints': Npoints, 'Nshots': Nshots})#NufftObj.solve(dat[:,:,sl,ch].flatten(), solver='cg', maxiter=50) 241 | 242 | 243 | #sos = np.sum(np.abs(im), -1) 244 | sos = np.sqrt(np.sum(np.abs(im) ** 2, -1)) 245 | sos = np.divide(sos, 
np.max(sos)) 246 | 247 | if plot: 248 | #plt.imshow(np.rot90(np.abs(sos[:,:,0]),-1), cmap='gray') 249 | plt.imshow(sos[:, :, 0], cmap='gray') 250 | plt.axis('off') 251 | plt.title('Uncorrected Image') 252 | plt.show() 253 | 254 | if save: 255 | if dst_folder is None: 256 | raise ValueError('Please specify destination folder') 257 | np.save(dst_folder + 'uncorrected_spiral.npy', sos) 258 | return sos -------------------------------------------------------------------------------- /OCTOPUS/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/utils/__init__.py -------------------------------------------------------------------------------- /OCTOPUS/utils/dataio.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Author: Marina Manso Jimeno 4 | Last modified: 07/21/2020 5 | ''' 6 | import numpy as np 7 | import scipy.io as sio 8 | import nibabel as nib 9 | import os 10 | 11 | def get_data_from_file(input): 12 | ''' 13 | Get the data of a file given a string or a dictionary independently of the data format 14 | 15 | Parameters 16 | ---------- 17 | input : str or dict 18 | Input path or dictionary in case of user upload (Colab) 19 | 20 | Returns 21 | ------- 22 | file_data : np.ndarray 23 | Data extracted from the file 24 | ''' 25 | if isinstance(input, str): 26 | file_name = input 27 | elif isinstance(input, dict): 28 | file_name = next(iter(input)) 29 | else: #pathlib 30 | file_name = input._str 31 | file_format = file_name[file_name.find('.'):] 32 | if file_format == '.mat': 33 | file_data_dict = sio.loadmat(file_name) 34 | file_data = file_data_dict[list(file_data_dict.keys())[-1]] 35 | elif file_format == '.npy': 36 | file_data = np.load(file_name) 37 | elif file_format == '.nii.gz' or 
file_format == '.nii': 38 | file_data = nib.load(file_name).get_fdata() 39 | else: 40 | raise ValueError('Sorry, this file format is not supported:' + (file_format)) 41 | 42 | return file_data 43 | 44 | from pydicom import dcmread 45 | 46 | 47 | def read_dicom(path): 48 | ''' 49 | Reads dicom file from path 50 | 51 | Parameters 52 | ---------- 53 | path : str 54 | Path of the file/dicom folder 55 | 56 | Returns 57 | ------- 58 | vol : np.ndarray 59 | Array containing the image volume 60 | ''' 61 | vol = [] 62 | if os.path.isdir(path): 63 | for files_in in os.listdir(path): 64 | vol.append(dcmread(os.path.join(path, files_in)).pixel_array) 65 | vol = np.moveaxis(np.stack(vol), 0, -1) 66 | elif os.path.isfile(path): 67 | data = dcmread(path) 68 | # Get the image data 69 | vol = data.pixel_array 70 | return vol 71 | 72 | 73 | -------------------------------------------------------------------------------- /OCTOPUS/utils/log_kernel.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUS/utils/log_kernel.mat -------------------------------------------------------------------------------- /OCTOPUS/utils/metrics.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Methods to calculate off-resonance correction performance metrics 4 | \nAuthor: Marina Manso Jimeno 5 | \nLast updated: 06/03/2020 6 | ''' 7 | 8 | import scipy.io as sio 9 | import cv2 10 | 11 | from skimage.metrics import peak_signal_noise_ratio as pSNR 12 | from skimage.metrics import structural_similarity as SSIM 13 | 14 | 15 | import numpy as np 16 | import matplotlib.pyplot as plt 17 | 18 | from skimage import data, img_as_float 19 | 20 | def LoG(im): 21 | ''' 22 | Laplacian of a Gaussian implementation with kernel size (15, 15) and sigma equal 
to 1.5 pixels 23 | 24 | Parameters 25 | ---------- 26 | im : numpy.ndarray 27 | Input image 28 | 29 | Returns 30 | ------- 31 | log_im : numpy.ndarray 32 | Laplacian of a Gaussian of the image 33 | ''' 34 | import os  # local import: locate the rotationally symmetric kernel next to this module 35 | log_kernel = sio.loadmat(os.path.join(os.path.dirname(__file__), 'log_kernel.mat'))['log_kernel'] 36 | log_im = cv2.filter2D(np.abs(im), -1, log_kernel) 37 | 38 | return log_im 39 | 40 | def HFEN(im1, im2): 41 | ''' 42 | High Frequency Error Norm calculation for two images 43 | 44 | Parameters 45 | ---------- 46 | im1 : numpy.ndarray 47 | Image 1. Reference image, same shape as im2 48 | im2 : numpy.ndarray 49 | Image 2 50 | 51 | Returns 52 | ------- 53 | hfen : float 54 | Measured HFEN value 55 | ''' 56 | log1 = LoG(im1) 57 | log2 = LoG(im2) 58 | 59 | hfen = np.linalg.norm((log1 - log2), 2) / np.linalg.norm(log2, 2) 60 | 61 | return hfen 62 | 63 | def create_table(stack_of_images, col_names, franges): 64 | ''' 65 | Displays a table with the metrics for images corrected using the different ORC methods 66 | 67 | Parameters 68 | ---------- 69 | stack_of_images : numpy.ndarray 70 | stack_of_images[0] : ground truth. stack_of_images[1] : CPR corrected. stack_of_images[2] : fs-CPR corrected. stack_of_images[3] : MFI corrected. 
71 | col_names : tuple 72 | Names for the columns 73 | franges : tuple 74 | Frequency ranges of the original field map 75 | ''' 76 | 77 | nmetrics = 2 #3 78 | nmethods = len(stack_of_images) - 1 79 | 80 | if len(col_names) != nmethods: 81 | raise ValueError('Number of methods does not match the number of columns for the table') 82 | if isinstance(franges, tuple): 83 | nfranges = len(franges) 84 | else: 85 | franges = [franges] 86 | nfranges = 1 87 | stack_of_images = np.expand_dims(stack_of_images, axis=3) 88 | 89 | data = np.zeros((nfranges, nmetrics, nmethods)) 90 | 91 | for fr in range(nfranges): 92 | ims_fr = np.squeeze(stack_of_images[:,:,:,fr]) 93 | GT = ims_fr[0] 94 | for col in range(nmethods): 95 | corr_im = ims_fr[col + 1] 96 | data[fr, 0, col] = pSNR(GT, corr_im, data_range=corr_im.max() - corr_im.min()) 97 | data[fr, 1, col] = SSIM(GT, corr_im, data_range=corr_im.max() - corr_im.min()) 98 | #data[fr, 2, col] = HFEN(GT, corr_im) 99 | 100 | # Get some pastel shades for the colors 101 | colors = plt.cm.BuPu(np.linspace(0.25, 0.7, nfranges)) 102 | labels = ['pSNR', 'SSIM', 'HFEN'] 103 | fig, ax = plt.subplots(nmetrics, 1, sharex=True) 104 | x = range(nmethods) 105 | plt.xticks(x, col_names) 106 | 107 | for fr in range(nfranges): 108 | data_fr = np.squeeze(data[fr,:,:]) 109 | for row in range(nmetrics): 110 | ax[row].plot(x, data_fr[row, :], 'o-', color=colors[fr], label=franges[fr] if row == 0 else "") 111 | ax[row].set_ylabel(labels[row]) 112 | if row != nmetrics -1: 113 | ax[row].xaxis.set_visible(False) 114 | fig.legend() 115 | 116 | if nfranges == 1: 117 | table_data = np.round(data[0],2) 118 | row_labels = labels 119 | row_colors = colors 120 | else: 121 | 122 | #col_names= list(col_names) 123 | #col_names.insert(0, 'Freq range') 124 | table_data = np.zeros((nfranges * nmetrics, nmethods )).tolist() 125 | row_count = 0 126 | row_labels = [] 127 | row_colors = [] 128 | for metric in range(nmetrics): 129 | for fr in range(nfranges): 130 | 
row_labels.append(franges[fr]) 131 | row_colors.append(colors[fr]) 132 | table_data[row_count][:] = np.round(data[fr, metric, :], 2) 133 | row_count += 1 134 | 135 | the_table = plt.table(cellText=table_data, 136 | cellLoc='center', 137 | 138 | rowLabels=row_labels, 139 | rowColours=row_colors, 140 | colLabels=col_names, 141 | loc='bottom') 142 | 143 | h = the_table.get_celld()[(0, 0)].get_height() 144 | w = the_table.get_celld()[(0, 0)].get_width() / (nmethods + 1) 145 | header = [the_table.add_cell(pos, -2, w, h, loc="center", facecolor="none") for pos in range(1, nfranges * nmetrics + 1)] 146 | count = 0 147 | for i in range(0, nfranges * nmetrics, nfranges):  # one header group of nfranges rows per metric 148 | header[i].visible_edges = "TLR" 149 | for j in range(1, nfranges): 150 | header[i + j].visible_edges = "LR" 151 | header[i + nfranges // 2].get_text().set_text(labels[count]) 152 | count += 1 153 | header[-1].visible_edges = 'BLR' 154 | 155 | the_table._bbox = [0, -2, 1, 1.7] 156 | the_table.auto_set_font_size(False) 157 | the_table.set_fontsize(10) 158 | the_table.scale(1.5, 1.5) 159 | plt.subplots_adjust(bottom=0.45, left=0.2) 160 | 161 | fig.suptitle('Performance metrics') 162 | plt.show() 163 | -------------------------------------------------------------------------------- /OCTOPUS/utils/plotting.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | ''' 3 | Methods to plot the resulting images from off-resonance correction 4 | \nAuthor: Marina Manso Jimeno 5 | \nLast updated: 06/03/2020 6 | ''' 7 | import numpy as np 8 | import matplotlib.pyplot as plt 9 | 10 | from mpl_toolkits.axes_grid1 import ImageGrid 11 | 12 | def plot_correction_results(im_stack, col_names, row_names): 13 | ''' 14 | Creates a plot with the resulting correction images in a grid 15 | 16 | Parameters 17 | ---------- 18 | im_stack : numpy.ndarray 19 | Stack of images.
[0] Corrupted images, [1] to [len(im_stack) - 1] corrected images using the different methods 20 | col_names : tuple 21 | Titles for the columns of the plot. Correction methods. 22 | row_names : tuple 23 | Titles for the rows of the plot. Off-resonance frequency ranges. 24 | ''' 25 | if len(row_names) == 1: 26 | im_stack = np.expand_dims(im_stack, axis=-1) 27 | 28 | nrows = im_stack.shape[-1] 29 | ncols = im_stack.shape[0] 30 | 31 | im_list = [] 32 | for frange in range(nrows): 33 | for method in range(ncols): 34 | im_list.append(np.abs(np.squeeze(im_stack[method, :, :, frange]))) 35 | 36 | fig = plt.figure() 37 | grid = ImageGrid(fig, 111, # similar to subplot(111) 38 | nrows_ncols=(nrows, ncols), # creates an nrows x ncols grid of axes 39 | axes_pad=0, # pad between axes in inches 40 | ) 41 | 42 | for ax, im, c in zip(grid, im_list, range(len(im_list))): 43 | # Iterating over the grid returns the Axes. 44 | ax.imshow(im, cmap='gray') 45 | ax.xaxis.set_visible(False) 46 | ax.yaxis.set_visible(False) 47 | if c < ncols:  # first row: column titles (correction methods) 48 | ax.set_title(col_names[c]) 49 | if c % ncols == 0:  # first column: row labels (frequency ranges) 50 | ax.yaxis.set_visible(True) 51 | ax.yaxis.set_ticks([]) 52 | ax.yaxis.set_label_text(row_names[c // ncols], fontsize=12) 53 | 54 | plt.show() 55 | 56 | -------------------------------------------------------------------------------- /OCTOPUSLogo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUSLogo.png -------------------------------------------------------------------------------- /OCTOPUSLogo_Rect.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/OCTOPUSLogo_Rect.png -------------------------------------------------------------------------------- /README.md:
-------------------------------------------------------------------------------- 1 |

2 | 3 |

4 | 5 | [![DOI](https://zenodo.org/badge/214007114.svg)](https://zenodo.org/badge/latestdoi/214007114) 6 | 7 | # OCTOPUS: Off-resonance CorrecTion OPen-soUrce Software 8 | `OCTOPUS` is an open-source tool that provides off-resonance correction methods for Magnetic Resonance (MR) images. In particular, the implemented techniques are Conjugate Phase Reconstruction (CPR) [[1]](#references), frequency-segmented CPR [[2]](#references) and Multi-Frequency Interpolation (MFI) [[3]](#references). 9 | 10 | Off-resonance is a type of MR image artifact. It originates from the accumulation of phase by off-resonant spins along the read-out direction, caused by field inhomogeneities, tissue susceptibilities and chemical shift, among other possible sources [[4]](#references). Trajectories with long read-outs are therefore more prone to this artifact. The image effects are typically blurring and/or geometric distortion and, consequently, quality deterioration [[5]](#references). 11 | 12 | `OCTOPUS` leverages existing techniques and outputs an artifact-corrected (or artifact-mitigated) image reconstruction given the raw data from the scanner, the k-space trajectory and a field map. It is targeted at MR scientists, researchers, engineers and students who work with off-resonance-prone trajectories, such as spirals. 13 | 14 | To learn more about the methods used and their implementation, visit the [API docs][api-docs]. 15 | 16 | ## Installation 17 | 1. Install Python (>=Python 3.6) 18 | 2. Create and activate a virtual environment (optional but recommended) 19 | 3. Copy and paste this command in your terminal 20 | ```pip install MR-OCTOPUS``` 21 | 22 | **Otherwise, [skip the installation!]** Run `OCTOPUS` in your browser instead. 23 | 24 | ## Quick start 25 | The [Examples folder] contains scripts and data to run off-resonance correction on numerical simulations and phantom images for different k-space trajectories and field maps.
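The blurring mechanism described above is what the numerical-simulation demos reproduce: every k-space sample is acquired at a different time, so spins at off-resonance frequency Δf(r) contribute an extra phase exp(-i2πΔf(r)t_k) to that sample. A minimal, brute-force NumPy sketch of this forward model (illustrative only — the function name and toy field map here are hypothetical, not the `OCTOPUS` API):

```python
import numpy as np

def offres_corrupt_kspace(im, fieldmap_hz, sample_times):
    """Sample-by-sample off-resonance forward model: k-space sample (iy, ix)
    is taken from the image after it has accrued the extra phase
    exp(-2j*pi*df(r)*t_k) at that sample's acquisition time t_k.
    One FFT per sample -- only sensible for tiny demo images."""
    ny, nx = im.shape
    ksp = np.zeros((ny, nx), dtype=complex)
    for iy in range(ny):
        for ix in range(nx):
            t = sample_times[iy, ix]
            im_t = im * np.exp(-2j * np.pi * fieldmap_hz * t)
            full = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(im_t)))
            ksp[iy, ix] = full[iy, ix]
    return ksp

# Tiny demo: square phantom, left-right linear field map, 5 ms raster readout
im = np.zeros((16, 16)); im[6:10, 6:10] = 1.0
df = np.tile(np.linspace(-50.0, 50.0, 16), (16, 1))   # field map in Hz
t = np.linspace(0.0, 5e-3, 16 * 16).reshape(16, 16)   # sample times in s
corrupted = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(
    offres_corrupt_kspace(im, df, t))))
```

Reconstructing `corrupted` with a plain inverse FFT shows the blurring/distortion that CPR, fs-CPR and MFI then try to undo; with `df` set to zero the phantom comes back unchanged.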
26 | 27 | After the [installation] is completed, download the [example data]. Now you can run two types of demos. More information about these and other experiments can be found in the [Wiki page]. 28 | 29 | ### 1. Numerical simulations 30 | 31 | `numsim_cartesian.py` and `numsim_spiral.py` run a forward model on a 192x192 Shepp-Logan phantom image. They simulate the off-resonance effect of a Cartesian and a spiral k-space trajectory, respectively, given a simulated field map. 32 | 33 | With `OCTOPUS.fieldmap.simulate` you can experiment with the effect of the type of field map and its frequency range on the output corrupted image. 34 | 35 | The corrupted image is then corrected using CPR, fs-CPR and MFI and the results are displayed. 36 | 37 | ### 2. In vitro experiment 38 | If you want to use `OCTOPUS` to correct real data, you can use `ORC_main.py` as a template. 39 | 1. Fill the `settings.ini` file with the paths for your inputs and outputs. NOTE: the default settings are configured to run the script using the sample data provided. 40 | 2. Input your field of view (FOV), gradient raster time (dt), and echo time (TE). 41 | ```python 42 | FOV = # meters 43 | dt = # seconds 44 | TE = # seconds 45 | ``` 46 | 3. Check that the dimensions of your inputs agree: 47 | `rawdata` dims = `ktraj` dims 48 | 4. Specify the number of frequency segments for the fs-CPR and MFI methods 49 | ```python 50 | Lx = # L=Lmin * Lx 51 | ``` 52 | 5. Run the script. 53 | The program will display an image panel with the original image and the corrected versions. 54 | 55 | ### 3. Command line implementation - **NEW!** 56 | Now you can easily run OCTOPUS using commands on your terminal.
After installation, type: 57 | ```bash 58 | OCTOPUS_cli path/to/container/folder rawdata_file kspace_trajectory_file field_map_file correction_method 59 | ``` 60 | For more information about the command line implementation and its required arguments, type: 61 | ```bash 62 | OCTOPUS_cli -h 63 | ``` 64 | For more information about how to structure your data, visit the [API docs][api-docs]. 65 | 66 | ## Skip the installation! - `OCTOPUS` in your browser 67 | 68 | There's no need to go through the installation process. Using this [template][colab-template] you can now run off-resonance correction in your browser! 69 | 70 | As a demo, you can use the [example data] provided for the [in vitro experiment]. 71 | 72 | ## Contributing and Community guidelines 73 | `OCTOPUS` adheres to a code of conduct adapted from the [Contributor Covenant] code of conduct. 74 | Contributing guidelines can be found [here][contrib-guidelines]. 75 | 76 | ## References 77 | 1. Maeda, A., Sano, K. and Yokoyama, T. (1988), Reconstruction by weighted correlation for MRI with time-varying gradients. IEEE Transactions on Medical Imaging, 7(1): 26-31. doi: 10.1109/42.3926 78 | 2. Noll, D. C., Pauly, J. M., Meyer, C. H., Nishimura, D. G. and Macovskj, A. (1992), Deblurring for non‐2D Fourier transform magnetic resonance imaging. Magn. Reson. Med., 25: 319-333. doi:10.1002/mrm.1910250210 79 | 3. Man, L., Pauly, J. M. and Macovski, A. (1997), Multifrequency interpolation for fast off‐resonance correction. Magn. Reson. Med., 37: 785-792. doi:10.1002/mrm.1910370523 80 | 4. Noll, D. C., Meyer, C. H., Pauly, J. M., Nishimura, D. G. and Macovski, A. (1991), A homogeneity correction method for magnetic resonance imaging with time-varying gradients. IEEE Transactions on Medical Imaging, 10(4): 629-637. doi: 10.1109/42.108599 81 | 5. Schomberg, H. (1999), Off-resonance correction of MR images. IEEE Transactions on Medical Imaging, 18(6): 481-495.
doi: 10.1109/42.781014 82 | 83 | [api-docs]: https://mr-octopus.readthedocs.io/en/latest/ 84 | [Contributor Covenant]: http://contributor-covenant.org 85 | [contrib-guidelines]: https://github.com/imr-framework/OCTOPUS/blob/master/CONTRIBUTING.md 86 | [installation]: #installation 87 | [in vitro experiment]: #2-in-vitro-experiment 88 | [Examples folder]: https://github.com/imr-framework/OCTOPUS/tree/master/OCTOPUS/Examples 89 | [example data]: https://github.com/imr-framework/OCTOPUS/blob/master/OCTOPUS/Examples/examples_zip.zip 90 | [colab-template]: https://colab.research.google.com/drive/1hEIj5LaF19yOaWkSqi2uWXyy3u6UgKoP?usp=sharing 91 | [skip the installation!]: #skip-the-installation---octopus-in-your-browser 92 | [Wiki page]: https://github.com/imr-framework/OCTOPUS/wiki/Welcome-to-the-OCTOPUS-wiki! 93 | -------------------------------------------------------------------------------- /doc/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line, and also 5 | # from the environment for the first two. 6 | SPHINXOPTS ?= 7 | SPHINXBUILD ?= sphinx-build 8 | SOURCEDIR = source 9 | BUILDDIR = build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 21 | -------------------------------------------------------------------------------- /doc/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=source 11 | set BUILDDIR=build 12 | 13 | if "%1" == "" goto help 14 | 15 | %SPHINXBUILD% >NUL 2>NUL 16 | if errorlevel 9009 ( 17 | echo. 18 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 19 | echo.installed, then set the SPHINXBUILD environment variable to point 20 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 21 | echo.may add the Sphinx directory to PATH. 22 | echo. 23 | echo.If you don't have Sphinx installed, grab it from 24 | echo.http://sphinx-doc.org/ 25 | exit /b 1 26 | ) 27 | 28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 29 | goto end 30 | 31 | :help 32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 33 | 34 | :end 35 | popd 36 | -------------------------------------------------------------------------------- /doc/source/OCTOPUS.fieldmap.rst: -------------------------------------------------------------------------------- 1 | OCTOPUS.fieldmap subpackage 2 | =========================== 3 | 4 | OCTOPUS.fieldmap.simulate module 5 | -------------------------------- 6 | 7 | .. automodule:: OCTOPUS.fieldmap.simulate 8 | :members: 9 | :undoc-members: 10 | :show-inheritance: 11 | 12 | OCTOPUS.fieldmap.unwrap module 13 | ------------------------------ 14 | 15 | .. 
automodule:: OCTOPUS.fieldmap.unwrap 16 | :members: 17 | :undoc-members: 18 | :show-inheritance: 19 | 20 | -------------------------------------------------------------------------------- /doc/source/OCTOPUS.recon.rst: -------------------------------------------------------------------------------- 1 | OCTOPUS.recon subpackage 2 | ======================== 3 | 4 | OCTOPUS.recon.imtransforms module 5 | --------------------------------- 6 | 7 | .. automodule:: OCTOPUS.recon.imtransforms 8 | :members: 9 | :undoc-members: 10 | :show-inheritance: 11 | 12 | OCTOPUS.recon.rawdata\_recon module 13 | ----------------------------------- 14 | 15 | .. automodule:: OCTOPUS.recon.rawdata_recon 16 | :members: 17 | :undoc-members: 18 | :show-inheritance: 19 | -------------------------------------------------------------------------------- /doc/source/OCTOPUS.rst: -------------------------------------------------------------------------------- 1 | OCTOPUS package 2 | =============== 3 | 4 | OCTOPUS.ORC module 5 | ------------------ 6 | 7 | .. automodule:: OCTOPUS.ORC 8 | :members: 9 | :undoc-members: 10 | :show-inheritance: 11 | 12 | 13 | Subpackages 14 | ----------- 15 | 16 | .. toctree:: 17 | :maxdepth: 4 18 | 19 | OCTOPUS.fieldmap 20 | OCTOPUS.recon 21 | OCTOPUS.utils 22 | 23 | 24 | -------------------------------------------------------------------------------- /doc/source/OCTOPUS.utils.rst: -------------------------------------------------------------------------------- 1 | OCTOPUS.utils subpackage 2 | ======================== 3 | 4 | OCTOPUS.utils.dataio module 5 | --------------------------- 6 | 7 | .. automodule:: OCTOPUS.utils.dataio 8 | :members: 9 | :undoc-members: 10 | :show-inheritance: 11 | 12 | OCTOPUS.utils.metrics module 13 | ---------------------------- 14 | 15 | .. automodule:: OCTOPUS.utils.metrics 16 | :members: 17 | :undoc-members: 18 | :show-inheritance: 19 | 20 | OCTOPUS.utils.plotting module 21 | ----------------------------- 22 | 23 | .. 
automodule:: OCTOPUS.utils.plotting 24 | :members: 25 | :undoc-members: 26 | :show-inheritance: 27 | 28 | -------------------------------------------------------------------------------- /doc/source/conf.py: -------------------------------------------------------------------------------- 1 | # Configuration file for the Sphinx documentation builder. 2 | # 3 | # This file only contains a selection of the most common options. For a full 4 | # list see the documentation: 5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html 6 | 7 | # -- Path setup -------------------------------------------------------------- 8 | 9 | # If extensions (or modules to document with autodoc) are in another directory, 10 | # add these directories to sys.path here. If the directory is relative to the 11 | # documentation root, use os.path.abspath to make it absolute, like shown here. 12 | # 13 | import os 14 | import sys 15 | import sphinx_rtd_theme 16 | sys.path.insert(0, os.path.abspath('../../')) 17 | 18 | 19 | # -- Project information ----------------------------------------------------- 20 | 21 | project = 'OCTOPUS' 22 | copyright = '2020, Columbia University in the City of New York' 23 | author = 'Marina Manso Jimeno' 24 | 25 | # The full version, including alpha/beta/rc tags 26 | release = '1' 27 | 28 | 29 | # -- General configuration --------------------------------------------------- 30 | master_doc = 'index' 31 | # Add any Sphinx extension module names here, as strings. They can be 32 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 33 | # ones. 34 | extensions = ['sphinx.ext.napoleon','sphinx.ext.autodoc', 'sphinx_rtd_theme'] 35 | 36 | # Add any paths that contain templates here, relative to this directory. 37 | templates_path = ['_templates'] 38 | 39 | # List of patterns, relative to source directory, that match files and 40 | # directories to ignore when looking for source files. 
41 | # This pattern also affects html_static_path and html_extra_path. 42 | exclude_patterns = [] 43 | 44 | 45 | # -- Options for HTML output ------------------------------------------------- 46 | 47 | # The theme to use for HTML and HTML Help pages. See the documentation for 48 | # a list of builtin themes. 49 | # 50 | #html_theme = 'alabaster' 51 | html_theme = "sphinx_rtd_theme" 52 | # Add any paths that contain custom static files (such as style sheets) here, 53 | # relative to this directory. They are copied after the builtin static files, 54 | # so a file named "default.css" will overwrite the builtin "default.css". 55 | html_static_path = ['_static'] -------------------------------------------------------------------------------- /doc/source/index.rst: -------------------------------------------------------------------------------- 1 | .. OCTOPUS documentation master file, created by 2 | sphinx-quickstart on Sat Jul 18 10:53:09 2020. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | OCTOPUS: Off-resonance CorrecTion OPen-soUrce Software 7 | ====================================================== 8 | .. image:: ../../OCTOPUSLogo_Rect.png 9 | :align: center 10 | 11 | `OCTOPUS `_ is an open-source tool that provides off-resonance correction methods for Magnetic Resonance (MR) images. In particular, the implemented techniques are Conjugate Phase Reconstruction (CPR) [1]_, frequency-segmented CPR [2]_ and Multi-Frequency Interpolation (MFI) [3]_. 12 | 13 | It is targeted to MR scientists, researchers, engineers and students who work with off-resonance-prone trajectories, such as spirals. 14 | 15 | .. [1] Maeda, A., Sano, K. and Yokoyama, T. (1988), Reconstruction by weighted correlation for MRI with time-varying gradients. IEEE Transactions on Medical Imaging, 7(1): 26-31. doi: 10.1109/42.3926 16 | 17 | .. [2] Noll, D. C., Pauly, J. M., Meyer, C. H., Nishimura, D. G. and Macovskj, A. 
(1992), Deblurring for non‐2D fourier transform magnetic resonance imaging. Magn. Reson. Med., 25: 319-333. doi:10.1002/mrm.1910250210 18 | 19 | .. [3] Man, L., Pauly, J. M. and Macovski, A. (1997), Multifrequency interpolation for fast off‐resonance correction. Magn. Reson. Med., 37: 785-792. doi:10.1002/mrm.1910370523 20 | 21 | .. automodule:: OCTOPUS 22 | :members: 23 | 24 | API documentation 25 | ================= 26 | 27 | .. toctree:: 28 | :maxdepth: 7 29 | 30 | modules 31 | 32 | Tools 33 | ===== 34 | * :ref:`search` -------------------------------------------------------------------------------- /doc/source/modules.rst: -------------------------------------------------------------------------------- 1 | OCTOPUS 2 | ======= 3 | 4 | .. toctree:: 5 | :maxdepth: 4 6 | 7 | OCTOPUS -------------------------------------------------------------------------------- /paper.bib: -------------------------------------------------------------------------------- 1 | @article{Smith2010, 2 | abstract = {Artifacts appear in MRI for a variety of reasons. Potential sources of artifacts include nonideal hardware characteristics, intrinsic tissue properties and biological behavior, assumptions underlying the data acquisition and image reconstruction process, and poor choice of scanning parameters. Careful study design and scanning protocols can prevent certain artifacts from occurring, but some are unavoidable. Numerous correction methods have been developed to mitigate the corruptive effects of artifacts and improve image diagnostic quality. These methods include special pulse sequence designs, improved scanning procedures and equipment, and advanced postprocessing algorithms. Recognizing artifacts and understanding their underlying causes are important when interpreting images and choosing a correction approach. {\textcopyright} 2010 Future Medicine Ltd.}, 3 | author = {Smith, Travis B. 
and Nayak, Krishna S.}, 4 | doi = {10.2217/iim.10.33}, 5 | issn = {17555191}, 6 | journal = {Imaging in Medicine}, 7 | keywords = {MRI,artifact correction,artifacts,magnetic resonance imaging,motion,off-resonance}, 8 | mendeley-groups = {OCTOPUS-JOSS}, 9 | number = {4}, 10 | pages = {445--457}, 11 | title = {{MRI artifacts and correction strategies}}, 12 | volume = {2}, 13 | year = {2010} 14 | } 15 | 16 | @article{Noll1991, 17 | abstract = {When time-varying gradients are used for imaging, the off-resonance behavior does not just cause geometric distortion as is the case with spin-warp imaging, but changes the shape of the impulse response and causes blurring. This effect is well known for projection reconstruction and spiral k-space scanning sequences. We introduce here a reconstruction and homogeneity correction method to correct for the zeroth order effects of inhomogeneity using a priori knowledge of the inhomogeneity. In this method, the data are segmented according to collection time, reconstructed using some fast, linear algorithm, corrected for inhomogeneity and then superimposed to yield a homogeneity corrected image. This segmented method is compared to a conjugate phase reconstruction in terms of degree of correction and execution time. We apply this method to in vivo images using projection-reconstruction and spiralscan sequences. {\textcopyright} 1991 IEEE}, 18 | author = {Noll, Douglas C. and Meyer, Craig H. and Pauly, John M. and Nishimura, Dwight G. 
and Macovski, Albert}, 19 | doi = {10.1109/42.108599}, 20 | issn = {1558254X}, 21 | journal = {IEEE Transactions on Medical Imaging}, 22 | mendeley-groups = {OCTOPUS-JOSS}, 23 | number = {4}, 24 | pages = {629--637}, 25 | title = {{A Homogeneity Correction Method for Magnetic Resonance Imaging with Time-Varying Gradients}}, 26 | volume = {10}, 27 | year = {1991} 28 | } 29 | 30 | @article{LukPat2001, 31 | abstract = {Off-resonant spins, produced by chemical shift, tissue-susceptibility differences, or main-field inhomogeneity, can cause blurring or shifts, severely compromising the diagnostic value of magnetic resonance images. To mitigate these off-resonance effects, the authors propose a technique whereby two images are acquired at different echo times (TEs) and interpolated to produce a single image with dramatically-reduced blurring. The phase difference of these two images is not used to produce a field map; instead, the weighted complex-valued average of the two images is used to produce a single image. Previously-described methods reconstruct a set of preliminary images, each at a different off-resonant frequency, and then assemble these into one final image, choosing the best off-resonant frequency for each voxel. Compared to these methods, the proposed technique requires the same or less processing time and is much less sensitive to errors in the field map. This technique was applied to centric-ordered EPI but it can be applied to any imaging trajectory, including one-shot EPI, spiral imaging, projection-reconstruction imaging, and 2D GRASE. {\textcopyright} 2001 Wiley-Liss, Inc.}, 32 | author = {Luk-Pat, Gerard T.
and Nishimura, Dwight G.}, 33 | doi = {10.1002/1522-2594(200102)45:2<269::AID-MRM1036>3.0.CO;2-5}, 34 | issn = {07403194}, 35 | journal = {Magnetic Resonance in Medicine}, 36 | keywords = {Coronary-artery imaging,Echo-planar imaging,Inhomogeneity,K-space,Off-resonance}, 37 | mendeley-groups = {OCTOPUS-JOSS}, 38 | number = {2}, 39 | pages = {269--276}, 40 | title = {{Reducing off-resonance distortion by echo-time interpolation}}, 41 | volume = {45}, 42 | year = {2001} 43 | } 44 | 45 | @article{Chen2008, 46 | abstract = {Spiral scanning is a promising MRI method, but one limitation is that off-resonance effects can cause image blurring. Most current off-resonance correction methods for spiral imaging require an accurate field map, which is difficult to obtain in many applications. Automatic methods can perform off-resonance correction without acquiring a field map. However, these methods are computationally inefficient and relatively prone to estimation error. This study describes a new semiautomatic off-resonance correction method that combines an automatic method with a low resolution field map acquisition for off-resonance correction in spiral scanning. Experiments demonstrate that this method is more robust than conventional automatic off-resonance correction and can provide more accurate off-resonance correction than conventional field map based methods. The proposed method is also computationally efficient and has been implemented for online reconstruction. 
{\textcopyright} 2008 Wiley-Liss, Inc.}, 47 | author = {Chen, Weitian and Meyer, Craig H.}, 48 | doi = {10.1002/mrm.21599}, 49 | issn = {07403194}, 50 | journal = {Magnetic Resonance in Medicine}, 51 | keywords = {Automatic deblurring,Deblurring,MRI,Off-resonance correction,Spiral}, 52 | mendeley-groups = {OCTOPUS-JOSS}, 53 | pages = {1212--1219}, 54 | title = {{Semiautomatic off-resonance correction in spiral imaging}}, 55 | volume = {59}, 56 | year = {2008} 57 | } 58 | 59 | @article{Schomberg1999, 60 | abstract = {In magnetic resonance imaging (MRI), the spatial inhomogeneity of the static magnetic field can cause degraded images if the reconstruction is based on inverse Fourier transformation. This paper presents and discusses a range of fast reconstruction algorithms that attempt to avoid such degradation by taking the field inhomogeneity into account. Some of these algorithms are new, others are modified versions of known algorithms. Speed and accuracy of all these algorithms are demonstrated using spiral MRI.}, 61 | author = {Schomberg, Hermann}, 62 | doi = {10.1109/42.781014}, 63 | issn = {02780062}, 64 | journal = {IEEE Transactions on Medical Imaging}, 65 | mendeley-groups = {OCTOPUS-JOSS}, 66 | number = {6}, 67 | pages = {481--495}, 68 | title = {{Off-resonance correction of MR images}}, 69 | volume = {18}, 70 | year = {1999} 71 | } 72 | 73 | @article{Maeda1988, 74 | abstract = {A general reconstruction algorithm for magnetic resonance imaging (MRI) with gradients having arbitrary time dependence is presented. This method estimates spin density by calculating the “weighted” correlation of the observed FID signal and the phase modulation function at each point. A theorem which states that this method can be derived from the conditions of linearity and shift invariance is presented. Since these conditions are general, most of the MRI reconstruction algorithms proposed so far are equivalent to the weighted correlation method. 
An explicit representation of the point spread function (PSF) in the weighted correlation method is given. By using this representation, a method to control the PSF and the static field inhomogeneity effects are studied. A correction method for the inhomogeneity effects is proposed, and a limitation is clarified. Some simulation results are also presented. {\textcopyright} 1988 IEEE}, 75 | author = {Maeda, Akira and Sano, Koichi and Yokoyama, Tetsuo}, 76 | doi = {10.1109/42.3926}, 77 | issn = {1558254X}, 78 | journal = {IEEE Transactions on Medical Imaging}, 79 | mendeley-groups = {OCTOPUS-JOSS}, 80 | number = {1}, 81 | pages = {26--31}, 82 | title = {{Reconstruction by Weighted Correlation for MRI with Time-Varying Gradients}}, 83 | volume = {7}, 84 | year = {1988} 85 | } 86 | 87 | @article{Noll1992, 88 | abstract = {For several non‐2D Fourier transform imaging methods, off‐resonant reconstruction does not just cause geometric distortion, but changes the shape of the point spread function and causes blurring. This effect is well known for projection reconstruction and spiral k‐space scanning sequences. We introduce here a method that automatically removes blur introduced by magnetic field inhomogeneity and susceptibility without using a resonant frequency map, making these imaging methods more useful. In this method, the raw data are modulated to several different frequencies and reconstructed to create a series of base images. Determination of degree of blur is done by calculating a focusing measure for each point in each base image and a composite image is then constructed using only the unblurred regions from each base image. This method has been successfully applied to phantom and in vivo images using projection‐reconstruction and spiral‐scan sequences. {\textcopyright} 1992 Academic Press, Inc. Copyright {\textcopyright} 1992 Wiley‐Liss, Inc., A Wiley Company}, 89 | author = {Noll, Douglas C. and Pauly, John M. and Meyer, Craig H. and Nishimura, Dwight G. 
and Macovskj, Albert}, 90 | doi = {10.1002/mrm.1910250210}, 91 | issn = {15222594}, 92 | journal = {Magnetic Resonance in Medicine}, 93 | mendeley-groups = {OCTOPUS-JOSS}, 94 | number = {2}, 95 | pages = {319--333}, 96 | title = {{Deblurring for non‐2D fourier transform magnetic resonance imaging}}, 97 | volume = {25}, 98 | year = {1992} 99 | } 100 | 101 | @article{Man1997, 102 | abstract = {Field inhomogeneities or susceptibility variations produce blurring in images acquired using non-2DFT k-space readout trajectories. This problem is more pronounced for sequences with long readout times such as spiral imaging. Theoretical and practical correction methods based on an acquired field map have been reported in the past. This paper introduces a new correction method based on the existing concept of frequency segmented correction but which is faster and theoretically more accurate. It consists of reconstructing the data at several frequencies to form a set of base images that are then added together with spatially varying linear coefficients derived from the field map. The new algorithm is applied to phantom and in vivo images acquired with projection reconstruction end spiral sequences, yielding sharply focused images.}, 103 | author = {Man, Lai Chee and Pauly, John M. and Macovski, Albert}, 104 | doi = {10.1002/mrm.1910370523}, 105 | issn = {07403194}, 106 | journal = {Magnetic Resonance in Medicine}, 107 | keywords = {MRI,blur,inhomogeneity,spiral}, 108 | mendeley-groups = {OCTOPUS-JOSS}, 109 | number = {5}, 110 | pages = {785--792}, 111 | pmid = {9126954}, 112 | title = {{Multifrequency interpolation for fast off-resonance correction}}, 113 | volume = {37}, 114 | year = {1997} 115 | } 116 | @article{Lustig2010, 117 | abstract = {A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction, is presented. It is a generalized reconstruction framework based on self-consistency. 
The reconstruction problem is formulated as an optimization that yields the most consistent solution with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both image and k-space domain are presented. These are based on a projection over convex sets and conjugate gradient algorithms. Phantom and in vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. {\textcopyright} 2010 Wiley-Liss, Inc.}, 118 | author = {Lustig, Michael and Pauly, John M.}, 119 | doi = {10.1002/mrm.22428}, 120 | issn = {07403194}, 121 | journal = {Magnetic Resonance in Medicine}, 122 | keywords = {Autocalibration,Compressed sensing,GRAPPA,Image reconstruction,Iterative reconstruction,Parallel imaging,SENSE}, 123 | mendeley-groups = {OCTOPUS-JOSS}, 124 | number = {2}, 125 | pages = {457--471}, 126 | pmid = {20665790}, 127 | title = {{SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space}}, 128 | url = {https://people.eecs.berkeley.edu/{~}mlustig/Software.html}, 129 | volume = {64}, 130 | year = {2010} 131 | } 132 | 133 | @article{Ostenson2017, 134 | abstract = {Magnetic resonance fingerprinting (MRF) pulse sequences often employ spiral trajectories for data readout. Spiral k-space acquisitions are vulnerable to blurring in the spatial domain in the presence of static field off-resonance. This work describes a blurring correction algorithm for use in spiral MRF and demonstrates its effectiveness in phantom and in vivo experiments. 
Results show that image quality of T1 and T2 parametric maps is improved by application of this correction. This MRF correction has negligible effect on the concordance correlation coefficient and improves coefficient of variation in regions of off-resonance relative to uncorrected measurements.}, 135 | author = {Ostenson, Jason and Robison, Ryan K. and Zwart, Nicholas R. and Welch, E. Brian}, 136 | doi = {10.1016/j.mri.2017.07.004}, 137 | issn = {18735894}, 138 | journal = {Magnetic Resonance Imaging}, 139 | keywords = {Magnetic resonance fingerprinting,Off-resonance,Relaxometry,Spiral trajectory}, 140 | mendeley-groups = {OCTOPUS-JOSS}, 141 | pages = {63--72}, 142 | title = {{Multi-frequency interpolation in spiral magnetic resonance fingerprinting for correction of off-resonance blurring}}, 143 | url = {https://github.com/jostenson/MRI{\_}Ostenson{\_}MRF{\_}MFI}, 144 | volume = {41}, 145 | year = {2017} 146 | } 147 | 148 | 149 | @article{Jenkinson2012, 150 | abstract = {FSL (the FMRIB Software Library) is a comprehensive library of analysis tools for functional, structural and diffusion MRI brain imaging data, written mainly by members of the Analysis Group, FMRIB, Oxford. 
For this NeuroImage special issue on “20 years of fMRI” we have been asked to write about the history, developments and current status of FSL. We also include some descriptions of parts of FSL that are not well covered in the existing literature. We hope that some of this content might be of interest to users of FSL, and also maybe to new research groups considering creating, releasing and supporting new software packages for brain image analysis.}, 151 | author = {Jenkinson, Mark and Beckmann, Christian F and Behrens, Timothy E J and Woolrich, Mark W and Smith, Stephen M}, 152 | doi = {10.1016/j.neuroimage.2011.09.015}, 153 | journal = {NeuroImage}, 154 | mendeley-groups = {OCTOPUS-JOSS}, 155 | number = {2}, 156 | pages = {782--790}, 157 | title = {{FSL}}, 158 | url = {https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FUGUE}, 159 | volume = {62}, 160 | year = {2012} 161 | } 162 | 163 | 164 | @article{Sutton2003, 165 | abstract = {In magnetic resonance imaging, magnetic field inhomogeneities cause distortions in images that are reconstructed by conventional fast Fourier transform (FFT) methods. Several noniterative image reconstruction methods are used currently to compensate for field inhomogeneities, but these methods assume that the field map that characterizes the off-resonance frequencies is spatially smooth. Recently, iterative methods have been proposed that can circumvent this assumption and provide improved compensation for off-resonance effects. However, straightforward implementations of such iterative methods suffer from inconveniently long computation times. This paper describes a tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation. We use a min-max formulation to derive the temporal interpolator. Speedups of around 60 were achieved by combining this temporal interpolator with a nonuniform fast Fourier transform with normalized root mean squared approximation errors of 0.07{\%}.
The proposed method provides fast, accurate, field-corrected image reconstruction even when the field map is not smooth.}, 166 | author = {Sutton, Bradley P. and Noll, Douglas C. and Fessler, Jeffrey A.}, 167 | doi = {10.1109/TMI.2002.808360}, 168 | issn = {02780062}, 169 | journal = {IEEE Transactions on Medical Imaging}, 170 | keywords = {Field inhomogeneity correction,Image reconstruction,Iterative methods,Magnetic resonance imaging,Temporal interpolation,Time segmentation}, 171 | mendeley-groups = {OCTOPUS-JOSS}, 172 | number = {2}, 173 | pages = {178--188}, 174 | pmid = {12715994}, 175 | title = {{Fast, iterative image reconstruction for MRI in the presence of field inhomogeneities}}, 176 | url = {https://web.eecs.umich.edu/{~}fessler/code/}, 177 | volume = {22}, 178 | year = {2003} 179 | } 180 | 181 | 182 | @article{Fessler2005, 183 | abstract = {In some types of magnetic resonance (MR) imaging, particularly functional brain scans, the conventional Fourier model for the measurements is inaccurate. Magnetic field inhomogeneities, which are caused by imperfect main fields and by magnetic susceptibility variations, induce distortions in images that are reconstructed by conventional Fourier methods. These artifacts hamper the use of functional MR imaging (fMRI) in brain regions near air/tissue interfaces. Recently, iterative methods that combine the conjugate gradient (CG) algorithm with nonuniform FFT (NUFFT) operations have been shown to provide considerably improved image quality relative to the conjugate-phase method. However, for non-Cartesian k-space trajectories, each CG-NUFFT iteration requires numerous k-space interpolations; these are operations that are computationally expensive and poorly suited to fast hardware implementations. This paper proposes a faster iterative approach to field-corrected MR image reconstruction based on the CG algorithm and certain Toeplitz matrices. 
This CG-Toeplitz approach requires k-space interpolations only for the initial iteration; thereafter, only fast Fourier transforms (FFTs) are required. Simulation results show that the proposed CG-Toeplitz approach produces equivalent image quality as the CG-NUFFT method with significantly reduced computation time. {\textcopyright} 2005 IEEE.}, 184 | author = {Fessler, Jeffrey A. and Lee, Sangwoo and Olafsson, Valur T. and Shi, Hugo R. and Noll, Douglas C.}, 185 | doi = {10.1109/TSP.2005.853152}, 186 | issn = {1053587X}, 187 | journal = {IEEE Transactions on Signal Processing}, 188 | keywords = {Magnetic susceptibility,Non-Cartesian sampling,Spiral trajectory,fMRI imaging}, 189 | mendeley-groups = {OCTOPUS-JOSS}, 190 | number = {9}, 191 | pages = {3393--3402}, 192 | title = {{Toeplitz-based iterative image reconstruction for MRI with correction for magnetic field inhomogeneity}}, 193 | url = {https://web.eecs.umich.edu/{~}fessler/code/}, 194 | volume = {53}, 195 | year = {2005} 196 | } 197 | 198 | 199 | @phdthesis{Nylund2014, 200 | author = {Nylund, Andreas}, 201 | mendeley-groups = {OCTOPUS-JOSS}, 202 | pages = {43}, 203 | school = {KTH Royal Institute of Technology}, 204 | title = {{Off-resonance correction for magnetic resonance imaging with spiral trajectories}}, 205 | year = {2014} 206 | } 207 | 208 | @article{Ravi2018, 209 | abstract = {Purpose: To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. Methods: We integrated the “Pulseq” platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner.
For comparison, Pulseq files using either MATLAB only (“MATLAB-Pulseq”) or Python only (“Python-Pulseq”) were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. Results: In vitro phantom images indicate equivalence with the vendor supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all the three implementations were fast (a few seconds). The software is capable of user-interface based development and/or command line programming. Conclusion: The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Ratio and Peripheral Nerve Stimulation computations.}, 210 | author = {Ravi, Keerthi Sravan and Potdar, Sneha and Poojar, Pavan and Reddy, Ashok Kumar and Kroboth, Stefan and Nielsen, Jon Fredrik and Zaitsev, Maxim and Venkatesan, Ramesh and Geethanath, Sairam}, 211 | doi = {10.1016/j.mri.2018.03.008}, 212 | issn = {18735894}, 213 | journal = {Magnetic Resonance Imaging}, 214 | keywords = {Graphical Programming Interface,MR method development tools,Open source pulse sequence design and rapid protot,Pulseq,Python Pulseq,Vendor neutral pulse sequence programming tools}, 215 | mendeley-groups = {OCTOPUS-JOSS}, 216 | pages = {9--15}, 217 | pmid = {29540330}, 218 | title = {{Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development}}, 219 | volume = {52}, 220 | year = {2018} 221 | } 222 | 223 | @article{2020NumPy, 224 | author = {Harris, Charles R. and Millman, K. 
Jarrod and 225 | van der Walt, Stéfan J and Gommers, Ralf and 226 | Virtanen, Pauli and Cournapeau, David and 227 | Wieser, Eric and Taylor, Julian and Berg, Sebastian and 228 | Smith, Nathaniel J. and Kern, Robert and Picus, Matti and 229 | Hoyer, Stephan and van Kerkwijk, Marten H. and 230 | Brett, Matthew and Haldane, Allan and 231 | Fernández del Río, Jaime and Wiebe, Mark and 232 | Peterson, Pearu and Gérard-Marchant, Pierre and 233 | Sheppard, Kevin and Reddy, Tyler and Weckesser, Warren and 234 | Abbasi, Hameer and Gohlke, Christoph and 235 | Oliphant, Travis E.}, 236 | title = {Array programming with {NumPy}}, 237 | journal = {Nature}, 238 | year = {2020}, 239 | volume = {585}, 240 | pages = {357–362}, 241 | doi = {10.1038/s41586-020-2649-2} 242 | } 243 | 244 | @article{2020SciPy, 245 | author = {Virtanen, Pauli and Gommers, Ralf and Oliphant, Travis E. and 246 | Haberland, Matt and Reddy, Tyler and Cournapeau, David and 247 | Burovski, Evgeni and Peterson, Pearu and Weckesser, Warren and 248 | Bright, Jonathan and {van der Walt}, St{\'e}fan J. and 249 | Brett, Matthew and Wilson, Joshua and Millman, K. Jarrod and 250 | Mayorov, Nikolay and Nelson, Andrew R. J. and Jones, Eric and 251 | Kern, Robert and Larson, Eric and Carey, C J and 252 | Polat, {\.I}lhan and Feng, Yu and Moore, Eric W. and 253 | {VanderPlas}, Jake and Laxalde, Denis and Perktold, Josef and 254 | Cimrman, Robert and Henriksen, Ian and Quintero, E. A. and 255 | Harris, Charles R. and Archibald, Anne M. and 256 | Ribeiro, Ant{\^o}nio H. 
and Pedregosa, Fabian and 257 | {van Mulbregt}, Paul and {SciPy 1.0 Contributors}}, 258 | title = {{{SciPy} 1.0: Fundamental Algorithms for Scientific 259 | Computing in Python}}, 260 | journal = {Nature Methods}, 261 | year = {2020}, 262 | volume = {17}, 263 | pages = {261--272}, 264 | adsurl = {https://rdcu.be/b08Wh}, 265 | doi = {10.1038/s41592-019-0686-2}, 266 | } 267 | 268 | @software{Nibabel, 269 | author = {Brett, Matthew and 270 | Markiewicz, Christopher J. and 271 | Hanke, Michael and 272 | Côté, Marc-Alexandre and 273 | Cipollini, Ben and 274 | McCarthy, Paul and 275 | Cheng, Christopher P. and 276 | Halchenko, Yaroslav O. and 277 | Cottaar, Michiel and 278 | Ghosh, Satrajit and 279 | Larson, Eric and 280 | Wassermann, Demian and 281 | Gerhard, Stephan and 282 | Lee, Gregory R. and 283 | Wang, Hao-Ting and 284 | Kastman, Erik and 285 | Rokem, Ariel and 286 | Madison, Cindee and 287 | Morency, Félix C. and 288 | Moloney, Brendan and 289 | Goncalves, Mathias and 290 | Riddell, Cameron and 291 | Burns, Christopher and 292 | Millman, Jarrod and 293 | Gramfort, Alexandre and 294 | Leppäkangas, Jaakko and 295 | Markello, Ross and 296 | van den Bosch, Jasper J.F. and 297 | Vincent, Robert D. and 298 | Braun, Henry and 299 | Subramaniam, Krish and 300 | Jarecka, Dorota and 301 | Gorgolewski, Krzysztof J. and 302 | Raamana, Pradeep Reddy and 303 | Nichols, B. Nolan and 304 | Baker, Eric M. and 305 | Hayashi, Soichi and 306 | Pinsard, Basile and 307 | Haselgrove, Christian and 308 | Hymers, Mark and 309 | Esteban, Oscar and 310 | Koudoro, Serge and 311 | Oosterhof, Nikolaas N. and 312 | Amirbekian, Bago and 313 | Nimmo-Smith, Ian and 314 | Nguyen, Ly and 315 | Reddigari, Samir and 316 | St-Jean, Samuel and 317 | Panfilov, Egor and 318 | Garyfallidis, Eleftherios and 319 | Varoquaux, Gael and 320 | Kaczmarzyk, Jakub and 321 | Legarreta, Jon Haitz and 322 | Hahn, Kevin S. and 323 | Hinds, Oliver P. 
and 324 | Fauber, Bennet and 325 | Poline, Jean-Baptiste and 326 | Stutters, Jon and 327 | Jordan, Kesshi and 328 | Cieslak, Matthew and 329 | Moreno, Miguel Estevan and 330 | Haenel, Valentin and 331 | Schwartz, Yannick and 332 | Darwin, Benjamin C and 333 | Thirion, Bertrand and 334 | Papadopoulos Orfanos, Dimitri and 335 | Pérez-García, Fernando and 336 | Solovey, Igor and 337 | Gonzalez, Ivan and 338 | Palasubramaniam, Jath and 339 | Lecher, Justin and 340 | Leinweber, Katrin and 341 | Raktivan, Konstantinos and 342 | Fischer, Peter and 343 | Gervais, Philippe and 344 | Gadde, Syam and 345 | Ballinger, Thomas and 346 | Roos, Thomas and 347 | Reddam, Venkateswara Reddy and 348 | Baratz, Zvi and 349 | freec84}, 350 | title = {nipy/nibabel: 3.0.2}, 351 | month = mar, 352 | year = 2020, 353 | publisher = {Zenodo}, 354 | version = {3.0.2}, 355 | doi = {10.5281/zenodo.3701467}, 356 | url = {https://doi.org/10.5281/zenodo.3701467} 357 | } 358 | 359 | @Article{Matplotlib, 360 | Author = {Hunter, J. 
D.}, 361 | Title = {Matplotlib: A 2D graphics environment}, 362 | Journal = {Computing in Science \& Engineering}, 363 | Volume = {9}, 364 | Number = {3}, 365 | Pages = {90--95}, 366 | abstract = {Matplotlib is a 2D graphics package used for Python for 367 | application development, interactive scripting, and publication-quality 368 | image generation across user interfaces and operating systems.}, 369 | publisher = {IEEE COMPUTER SOC}, 370 | doi = {10.1109/MCSE.2007.55}, 371 | year = 2007 372 | } 373 | 374 | @misc{itseez2015opencv, 375 | title={Open Source Computer Vision Library}, 376 | author={Itseez}, 377 | year={2015}, 378 | howpublished = {\url{https://github.com/itseez/opencv}} 379 | } 380 | 381 | @software{darcy_mason_2020_3891702, 382 | author = {Darcy Mason and 383 | scaramallion and 384 | rhaxton and 385 | mrbean-bremen and 386 | Jonathan Suever and 387 | Vanessasaurus and 388 | Guillaume Lemaitre and 389 | Dimitri Papadopoulos Orfanos and 390 | Aditya Panchal and 391 | Alex Rothberg and 392 | Joan Massich and 393 | James Kerns and 394 | Korijn van Golen and 395 | Thomas Robitaille and 396 | moloney and 397 | Matthew Shun-Shin and 398 | pawelzajdel and 399 | Markus Mattes and 400 | Blair Conrad and 401 | Félix C. Morency and 402 | Markus D. Herrmann and 403 | Hans Meine and 404 | Kevin S. 
Hahn and 405 | Masahiro Wada and 406 | colonelfazackerley and 407 | ferdymercury and 408 | huicpc0207 and 409 | Adam Klimont and 410 | Andrey Fedorov and 411 | Callan Bryant}, 412 | title = {pydicom/pydicom: pydicom 2.0.0}, 413 | month = may, 414 | year = 2020, 415 | publisher = {Zenodo}, 416 | version = {v2.0.0}, 417 | doi = {10.5281/zenodo.3891702}, 418 | url = {https://doi.org/10.5281/zenodo.3891702} 419 | } 420 | @Misc{pynufft, 421 | author = {Jyh-Miin Lin}, 422 | title = {{Pynufft}: {Python} non-uniform fast {F}ourier transform}, 423 | year = {2013–}, 424 | url = {https://github.com/jyhmiinlin/pynufft}, 425 | note = {Online; https://github.com/jyhmiinlin/pynufft; Dec 2016} 426 | } 427 | 428 | 429 | @article{scikit-image, 430 | author = {van der Walt, St{\'{e}}fan and Sch{\"{o}}nberger, Johannes L and Nunez-Iglesias, Juan and Boulogne, Fran{\c{c}}ois and Warner, Joshua D and Yager, Neil and Gouillart, Emmanuelle and Yu, Tony and The scikit-image contributors}, 431 | doi = {10.7717/peerj.453}, 432 | issn = {2167-8359}, 433 | journal = {PeerJ}, 434 | keywords = {Education, Open source, Python, Reproducible research, Scientific programming, Visualization, Image processing}, 435 | mendeley-groups = {OCTOPUS-JOSS}, 436 | pages = {e453}, 437 | title = {{scikit-image: image processing in {P}ython}}, 438 | url = {https://doi.org/10.7717/peerj.453}, 439 | volume = {2}, 440 | year = {2014} 441 | } 442 | 443 | @article{Shepp1974, 444 | abstract = {The authors generalize a reconstruction algorithm proposed by Ramachandran. The modified weighting function simultaneously achieves accuracy, simplicity, low computation time, as well as low sensitivity to noise. Using a simulated phantom, the authors compare the Fourier algorithm and a search algorithm, very similar to one described by G. N. Hounsfield.
The search algorithm required 12 iterations to obtain a reconstruction of accuracy and resolution comparable to that of the Fourier reconstruction, and was more sensitive to noise. To speed the search algorithm by using fewer iterations leaves decreased resolution in the region just inside the skull which could mask a subdural hematoma.}, 445 | author = {Shepp, L. A. and Logan, Benjamin F.}, 446 | doi = {10.1109/tns.1974.6499235}, 447 | issn = {00189499}, 448 | journal = {IEEE Transactions on Nuclear Science}, 449 | mendeley-groups = {OCTOPUS-JOSS}, 450 | pages = {21--43}, 451 | title = {{FOURIER RECONSTRUCTION OF A HEAD SECTION.}}, 452 | volume = {21}, 453 | year = {1974} 454 | } 455 | 456 | @book{Gonzalez2001, 457 | author = {Gonzalez, Rafael C. and Woods, Richard E.}, 458 | title = {Digital Image Processing}, 459 | year = {2001}, 460 | isbn = {0201180758}, 461 | publisher = {Addison-Wesley Longman Publishing Co., Inc.}, 462 | address = {USA}, 463 | edition = {2nd}, 464 | abstract = {From the Publisher: Digital Image Processing has been the leading textbook in its field for more than 20 years. As was the case with the 1977 and 1987 editions by Gonzalez and Wintz, and the 1992 edition by Gonzalez and Woods, the present edition was prepared with students and instructors in mind. The material is timely, highly readable, and illustrated with numerous examples of practical significance. All mainstream areas of image processing are covered, including a totally revised introduction and discussion of image fundamentals, image enhancement in the spatial and frequency domains, restoration, color image processing, wavelets, image compression, morphology, segmentation, and image description. Coverage concludes with a discussion of the fundamentals of object recognition. 
Although the book is completely self-contained, a Companion Web Site (see inside front cover) provides additional support in the form of review material, answers to selected problems, laboratory project suggestions, and a score of other features. A supplementary instructor's manual is available to instructors who have adopted the book for classroom use. New Features New chapters on wavelets, image morphology, and color image processing. More than 500 new images and over 200 new line drawings and tables. A revision and update of all chapters, including topics such as segmentation by watersheds. Numerous new examples with processed images of higher resolution. A reorganization that allows the reader to get to the material on actual image processing much sooner than before. Updated image compression standards and a new section on compressionusing wavelets. A more intuitive development of traditional topics such as image transforms and image restoration. Updated bibliography.} 465 | } 466 | 467 | 468 | -------------------------------------------------------------------------------- /paper.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'Off-resonance CorrecTion OPen soUrce Software (OCTOPUS)' 3 | tags: 4 | - Python 5 | - MRI 6 | - off-resonance correction 7 | authors: 8 | - name: Marina Manso-Jimeno 9 | orcid: 0000-0002-1141-2049 10 | affiliation: "1, 2" 11 | - name: John Thomas Vaughan Jr. 
12 | orcid: 0000-0002-6933-3757 13 | affiliation: "1, 2" 14 | - name: Sairam Geethanath 15 | orcid: 0000-0002-3776-4114 16 | affiliation: 2 17 | 18 | affiliations: 19 | - name: Department of Biomedical Engineering, Columbia University in the City of New York, USA 20 | index: 1 21 | - name: Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, USA 22 | index: 2 23 | date: 15 OCTOBER 2020 24 | bibliography: paper.bib 25 | --- 26 | 27 | # Summary 28 | 29 | `OCTOPUS` is a Python-based software for correction of off-resonance 30 | artifacts in Magnetic Resonance (MR) images. It implements three different 31 | methods for correction of both Cartesian and non-Cartesian data: Conjugate Phase Reconstruction (CPR), 32 | frequency-segmented CPR and Multi-Frequency Interpolation (MFI). `OCTOPUS` is easy to integrate into other two and three-dimensional reconstruction pipelines, which makes the tool highly flexible 33 | and customizable. 34 | 35 | # Statement of need 36 | 37 | Off-resonance is an MR artifact which occurs due to field inhomogeneities, differences in tissue 38 | susceptibilities and chemical shift [@Noll1991]. These phenomena can cause the phase of off-resonant spins to accumulate along the 39 | read-out direction, which can manifest as blurring, geometrical distortion 40 | and degradation in the reconstructed image [@LukPat2001]. Images 41 | acquired using long readout trajectories and/or at high fields where the 42 | field homogeneity is lower are more prone to this problem. However, 43 | such acquisition scenarios also deliver desirable properties, such as 44 | short scanning times, gradient efficiency, motion tolerance, and better 45 | signal-to-noise ratio [@Chen2008]. 46 | 47 | Multiple successful off-resonance correction methods have been reported 48 | in the literature [@Schomberg1999].
Most of them are based on Conjugate 49 | Phase Reconstruction (CPR), a method that counteracts the accumulated 50 | phase by demodulating k-space data with its conjugate [@Maeda1988]. 51 | Faster and more efficient implementations than the original CPR 52 | have been developed, such as frequency-segmented CPR [@Noll1992] and 53 | Multi-Frequency Interpolation (MFI) [@Man1997]. Frequency-segmented CPR reconstructs 54 | the corrected image by combining the pixels of "L" base images according to each pixel value on a field map. Each base image corresponds to the data demodulated at a fixed frequency, with 55 | the frequency values for each base image equally spaced within the field map frequency range. 56 | MFI works in a similar way to frequency-segmented CPR, with the main differences being that it 57 | requires a smaller number of base images (L) and that these images are added together into the corrected image using a set of 58 | linear coefficients derived from the field map. 59 | 60 | One can find optimised off-resonance correction capabilities within 61 | existing packages. Examples are: SPIRiT [@Lustig2010], a MATLAB-based 62 | approach for auto-calibrated parallel imaging reconstruction; Ostenson's 63 | MFI implementation for Magnetic Resonance Fingerprinting (MRF) 64 | [@Ostenson2017]; FUGUE, a tool for Echo-Planar Imaging (EPI) distortion 65 | correction that is part of the FSL library [@Jenkinson2012]; and the MIRT 66 | toolbox, a MATLAB-based MRI reconstruction package that offers field 67 | inhomogeneity correction using iterative reconstruction 68 | methods [@Sutton2003; @Fessler2005]. Nylund's thesis [@Nylund2014] also 69 | contains MATLAB source code for fs-CPR and MFI correction of spiral 70 | images. 71 | 72 | All of these implementations are highly specific, defined for a 73 | particular k-space trajectory or application, and/or include a single 74 | correction method.
SPIRiT is devoted to correcting data acquired using 75 | parallel imaging methods; Ostenson's package only corrects MRF spiral data and implements 76 | only one correction method; and FUGUE corrects distortion solely on EPI images. These limitations typically lead researchers to 77 | adapt their data in an attempt to fit them into the available pipelines 78 | or to write their own version of the methods. Either approach results in 79 | a significant investment of time and effort and can generate isolated 80 | implementations and inconsistent results. Furthermore, most of the 81 | available packages are also MATLAB-based, which, unlike Python, requires users to pay a license fee. 82 | 83 | `OCTOPUS` is aimed at filling this gap in MR off-resonance correction packages. It provides 84 | Python open-source code for three fundamental methods (CPR, fs-CPR, and 85 | MFI). The implementation is independent of the application and the image 86 | acquisition scheme, easing its integration into any reconstruction 87 | pipeline. `OCTOPUS` can also run in the browser through Google Colab, a freely hosted Jupyter notebook environment. 88 | Given this feature, `OCTOPUS` is the first zero-footprint off-resonance 89 | correction software, meaning it doesn't require software download, installation, or configuration on a user's local machine. 90 | 91 | # Functionality and limitations 92 | `OCTOPUS` is aimed at MR researchers working with long-readout or field-inhomogeneity-sensitive k-space trajectories or 93 | MR acquisition methods. A short demo is provided in the next section.
`OCTOPUS` corrects or reduces geometric distortion and/or blurring present in the images due to off-resonance effects by 94 | leveraging other Python libraries, specifically NumPy [@2020NumPy], SciPy [@2020SciPy], scikit-image [@scikit-image], 95 | NiBabel [@Nibabel], Matplotlib [@Matplotlib], OpenCV [@itseez2015opencv], Pydicom [@darcy_mason_2020_3891702], and PyNUFFT [@pynufft]. 96 | The expected output is an image with recovered, sharper edges and undistorted shape. 97 | 98 | Also, `OCTOPUS` corrects off-resonance independently of whether the trajectory used to acquire the data was Cartesian or non-Cartesian. 99 | The input of the correction methods can be either image or raw data. However, using raw data as input is more efficient 100 | and may avoid non-Cartesian trajectory-dependent artifacts. `OCTOPUS` is also able to correct 3D multi-slice and multi-channel data by feeding it to the tool in a slice- and channel-wise manner and then applying channel combination with the user's method of choice. 101 | 102 | Presently, the software limitations include correction restricted to data acquired in the absence of 103 | acceleration techniques, long correction times for large datasets, and degraded correction quality in the presence of highly inhomogeneous 104 | fields. Additionally, the tool has only been tested on Cartesian, EPI, and spiral data. 105 | 106 | # Short demo 107 | To illustrate the usage of the package, we performed in silico numerical 108 | simulations using a single-shot EPI trajectory, a single-shot spiral trajectory and a 109 | simulated field map. For these experiments we used a Shepp-Logan head phantom, which simulates a section of the skull and is widely used 110 | to test reconstruction algorithms [@Shepp1974]. Figure 1 shows all inputs and outputs of the experiment. The steps were: 111 | 112 | 1. Forward model simulation of the off-resonance effect on a 128x128 113 | Shepp-Logan phantom with a 256 mm² FOV.
114 | 115 | + Using single-shot EPI and spiral trajectories. Figure 1 shows simplified versions of both trajectories for visualization purposes. 116 | 117 | + Using a simulated field map based on a blurred version of the phantom image with frequency ranges of ±100, ±150 and ±200 Hz. 118 | 119 | 2. Correction of the results of the forward model with CPR, fs-CPR and MFI. 120 | 121 | ![Top row (left-right): Shepp-Logan phantom image (128x128), simplified single-shot EPI k-space trajectory, simplified single-shot spiral k-space trajectory, and simulated field map (128x128). Bottom row (left-right): EPI experiment results and spiral experiment results.](JOSS_figs/simfig.png) 122 | 123 | In both experiments, `OCTOPUS` successfully corrected the 124 | off-resonance-induced blurring and/or geometrical distortion. Note how the EPI-corrupted images show geometric distortion in the phase-encode direction while spiral-corrupted images show blurred and distorted edges. 125 | 126 | To test the effect of noise on the correction performance we introduced different levels of noise to a single-shot EPI trajectory-based simulation and measured the peak signal-to-noise ratio (pSNR) and Structural Similarity Index (SSIM). 127 | 128 | ![Effect of different noise levels on OCTOPUS correction performance measured using pSNR and SSIM.](JOSS_figs/noise_sim_epi.png) 129 | 130 | As expected, pSNR and SSIM are reduced as the off-resonance range widens and the noise level in the original image increases. Nevertheless, in all cases, the three implemented methods improve the metrics with respect to the off-resonance-corrupted image. 131 | 132 | Finally, to demonstrate the correction capabilities in 3D multi-slice and multi-channel data, we corrected phantom images of a Stack-of-Spirals acquisition with a matrix size of 72x72, FOV = 240 mm² and 54 slices. The images were acquired on a Siemens 3T Prisma scanner using a 20-channel head coil.
Figure 3 shows three representative slices and their off-resonance-corrected versions. The regions of the images highlighted in red show improved image quality and enhanced edges. 133 | 134 | ![Off-resonance correction of three slices of a Stack-of-Spirals 3D acquisition.](JOSS_figs/SoS_ORC_v2.png) 135 | 136 | # Acknowledgements 137 | 138 | This study was funded (in part) by the 'MR Technology Development Grant' 139 | and the 'Seed Grant Program for MR Studies' of the Zuckerman Mind Brain 140 | Behavior Institute at Columbia University (PI: Geethanath) and the 'Fast 141 | Functional MRI with sparse sampling and model-based reconstruction' of 142 | the National Institute of Biomedical Imaging and Bioengineering (PI: 143 | Fessler and, supplement, sub-award to Geethanath). 144 | 145 | # References 146 | 147 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | matplotlib>=3.1.3 2 | nibabel>=3.0.2 3 | numpy>=1.18.1 4 | opencv-python>=4.1.2.30 5 | pydicom>=2.0.0 6 | pynufft==2019.2.3 7 | scikit-image>=0.16.2 8 | scipy==1.4.1 -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import setuptools 2 | 3 | try: # Unicode decode error on Windows 4 | with open("README.md", "r") as fh: 5 | long_description = fh.read() 6 | except: 7 | long_description = 'Off-resonance correction of MR images' 8 | 9 | with open('requirements.txt', 'r') as f: 10 | install_reqs = f.read().strip() 11 | install_reqs = install_reqs.split("\n") 12 | 13 | setuptools.setup( 14 | name = 'MR-OCTOPUS', # How you named your package folder (MyLib) # Choose the same as "name" 15 | version = '0.2.7', # 0.2.1 for next stable release # Start with a small number and increase it with every change you make 16 | author = 'Marina Manso Jimeno', # Type in your name
17 | author_email = 'mm5290@columbia.edu', # Type in your E-Mail 18 | description = 'Off-resonance correction of MR images', # Give a short description about your library 19 | long_description = long_description, 20 | long_description_content_type = "text/markdown", 21 | url = 'https://github.com/imr-framework/OCTOPUS', # Provide either the link to your github or to your website 22 | packages = setuptools.find_packages(), 23 | include_package_data = True, 24 | install_requires = install_reqs, 25 | entry_points={ 26 | 'console_scripts': [ 27 | 'OCTOPUS_cli = OCTOPUS.prog:main' 28 | ] 29 | }, 30 | license = 'License :: OSI Approved :: GNU Affero General Public License v3', 31 | classifiers = [ 32 | "Programming Language :: Python :: 3.6", 33 | "Programming Language :: Python :: 3.7", 34 | "License :: OSI Approved :: GNU Affero General Public License v3", 35 | "Operating System :: OS Independent", 36 | ], 37 | ) -------------------------------------------------------------------------------- /tests/test_data/acrph_df.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/acrph_df.mat -------------------------------------------------------------------------------- /tests/test_data/acrph_df.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/acrph_df.npy -------------------------------------------------------------------------------- /tests/test_data/acrph_noncart_ksp.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/acrph_noncart_ksp.mat -------------------------------------------------------------------------------- 
/tests/test_data/ktraj_noncart.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/ktraj_noncart.npy -------------------------------------------------------------------------------- /tests/test_data/ktraj_noncart_dcf.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/ktraj_noncart_dcf.npy -------------------------------------------------------------------------------- /tests/test_data/sl_ph.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/sl_ph.mat -------------------------------------------------------------------------------- /tests/test_data/slph_cart_ksp.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/slph_cart_ksp.npy -------------------------------------------------------------------------------- /tests/test_data/slph_im.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/slph_im.npy -------------------------------------------------------------------------------- /tests/test_data/slph_noncart_ksp.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/imr-framework/OCTOPUS/90414cd3609e2267c143384a5df991657c503e64/tests/test_data/slph_noncart_ksp.npy -------------------------------------------------------------------------------- /tests/utests_ORC_methods.py: 
-------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | import unittest 3 | 4 | import scipy.io as sio 5 | import numpy as np 6 | 7 | import OCTOPUS.ORC as ORC 8 | 9 | from OCTOPUS.fieldmap.simulate import fieldmap_bin 10 | 11 | # Inputs 12 | raw_data = sio.loadmat('test_data/acrph_noncart_ksp.mat')['dat'] 13 | ktraj = np.load('test_data/ktraj_noncart.npy') 14 | ktraj_dcf = np.load('test_data/ktraj_noncart_dcf.npy').flatten() 15 | t_ro = ktraj.shape[0] * 10e-6 16 | T = np.linspace(4.6e-3, 4.6e-3 + t_ro, ktraj.shape[0]).reshape((ktraj.shape[0], 1)) 17 | 18 | field_map = np.load('test_data/acrph_df.npy') 19 | field_map_CPR = fieldmap_bin(field_map, 5) 20 | acq_params = {'Npoints': ktraj.shape[0], 'Nshots': ktraj.shape[1], 'N': field_map.shape[0], 'dcf': ktraj_dcf, 't_vector': T, 't_readout': t_ro} 21 | 22 | 23 | class TestfsCPR(unittest.TestCase): 24 | def test_fsCPR_given_rawData(self): 25 | # Test correction given raw data 26 | or_corr_im = ORC.fs_CPR(dataIn=raw_data[:,:,0], dataInType='raw', kt=ktraj, df=field_map, Lx=1, nonCart=1, params=acq_params) 27 | self.assertEqual(or_corr_im.shape, field_map.shape[:-1]) # Dimensions agree 28 | 29 | def test_fsCPR_given_im(self): 30 | # Test correction given image data 31 | or_corr_im = ORC.fs_CPR(dataIn=raw_data[:,:,0],dataInType='raw',kt=ktraj,df=field_map,Lx=1, nonCart=1, params=acq_params) 32 | or_corr_im2 = ORC.fs_CPR(dataIn=or_corr_im,dataInType='im',kt=ktraj,df=np.squeeze(field_map),Lx=1,nonCart=1,params=acq_params) 33 | self.assertEqual(or_corr_im2.shape, field_map.shape[:-1]) # Dimensions match 34 | self.assertEqual(or_corr_im2.all(), or_corr_im.all()) 35 | 36 | def test_fsCPR_wrongtype(self): 37 | # Error when dataIn is raw data but dataInType is 'im' 38 | with self.assertRaises(ValueError): ORC.fs_CPR(dataIn=raw_data[:, :, 0], dataInType='im', kt=ktraj, df=field_map, Lx=1, nonCart=1,
params=acq_params) 39 | or_corr_im = ORC.fs_CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map, Lx=1, 40 | nonCart=1, params=acq_params) 41 | # Error when dataIn is image data but dataInType is 'raw' 42 | with self.assertRaises(ValueError): ORC.fs_CPR(dataIn=or_corr_im, dataInType='raw', kt=ktraj, 43 | df=field_map, Lx=1, nonCart=1, params=acq_params) 44 | # Error when dataInType is other than 'raw' or 'im' 45 | with self.assertRaises(ValueError): ORC.fs_CPR(dataIn=raw_data[:, :, 0], dataInType='other', kt=ktraj, 46 | df=field_map, Lx=1, nonCart=1, params=acq_params) 47 | 48 | def test_find_nearest(self): 49 | # test that find_nearest works for positive and negative values 50 | array = np.linspace(-10, 10, 21) 51 | neg_val = -5 52 | pos_val = 5 53 | idx_neg = ORC.find_nearest(array, neg_val) 54 | idx_pos = ORC.find_nearest(array, pos_val) 55 | self.assertEqual(array[idx_neg], neg_val) 56 | self.assertEqual(array[idx_pos], pos_val) 57 | 58 | class TestMFI(unittest.TestCase): 59 | def test_MFI_given_rawData(self): 60 | # Test correction given raw data 61 | or_corr_im = ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map[:,:,0], Lx=1, 62 | nonCart=1, params=acq_params) 63 | self.assertEqual(or_corr_im.shape, field_map.shape[:-1]) # Dimensions agree 64 | 65 | def test_MFI_given_im(self): 66 | # Test correction given image data 67 | or_corr_im = ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map[:,:,0], Lx=1, 68 | nonCart=1, params=acq_params) 69 | or_corr_im2 = ORC.MFI(dataIn=or_corr_im, dataInType='im', kt=ktraj, df=field_map[:,:,0], Lx=1, nonCart=1, params=acq_params) 70 | self.assertEqual(or_corr_im2.shape, field_map.shape[:-1]) 71 | self.assertEqual(or_corr_im2.all(), or_corr_im.all()) 72 | 73 | def test_MFI_wrongtype(self): 74 | # Error when dataIn is raw data but dataInType is 'im' 75 | with self.assertRaises(ValueError): ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='im', kt=ktraj, 76 | 
df=field_map, Lx=1, nonCart=1, params=acq_params) 77 | or_corr_im = ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map[:,:,0], Lx=1, nonCart=1, 78 | params=acq_params) 79 | # Error when dataIn is image data but dataInType is 'raw' 80 | with self.assertRaises(ValueError): ORC.MFI(dataIn=or_corr_im, dataInType='raw', kt=ktraj, 81 | df=field_map, Lx=1, nonCart=1, params=acq_params) 82 | # Error when dataInType is other than 'raw' or 'im' 83 | with self.assertRaises(ValueError): ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='other', kt=ktraj, 84 | df=field_map, Lx=1, nonCart=1, params=acq_params) 85 | 86 | 87 | class testCPR(unittest.TestCase): 88 | def test_CPR_given_rawData(self): 89 | # Test correction given raw data 90 | or_corr_im = ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map_CPR[:,:,0], 91 | nonCart=1, params=acq_params) 92 | self.assertEqual(or_corr_im.shape, field_map.shape[:-1]) # Dimensions agree 93 | 94 | def test_CPR_given_im(self): 95 | # Test correction given image data 96 | or_corr_im = ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map_CPR[:,:,0], 97 | nonCart=1, params=acq_params) 98 | or_corr_im2 = ORC.CPR(dataIn=or_corr_im, dataInType='im', kt=ktraj, df=field_map_CPR[:,:,0], nonCart=1, params=acq_params) 99 | self.assertEqual(or_corr_im2.shape, field_map.shape[:-1]) 100 | self.assertEqual(or_corr_im2.all(), or_corr_im.all()) 101 | 102 | def test_CPR_wrongtype(self): 103 | # Error when dataIn is raw data but dataInType is 'im' 104 | with self.assertRaises(ValueError): ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='im', kt=ktraj, 105 | df=field_map_CPR, nonCart=1, params=acq_params) 106 | or_corr_im = ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, df=field_map_CPR[:,:,0], nonCart=1, 107 | params=acq_params) 108 | # Error when dataIn is image data but dataInType is 'raw' 109 | with self.assertRaises(ValueError): ORC.CPR(dataIn=or_corr_im, dataInType='raw', 
kt=ktraj, 110 | df=field_map_CPR, nonCart=1, params=acq_params) 111 | # Error when dataInType is other than 'raw' or 'im' 112 | with self.assertRaises(ValueError): ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='other', kt=ktraj, 113 | df=field_map_CPR, nonCart=1, params=acq_params) 114 | 115 | def test_CPR_wrongInputDimensions_imdata(self): 116 | # Image data is not NxN 117 | data = np.ones((acq_params['N'],acq_params['N']+1)) 118 | with self.assertRaises(ValueError): ORC.CPR(dataIn=data, dataInType='im', kt=ktraj, 119 | df=field_map_CPR, nonCart=1, params=acq_params) # non-Cartesian 120 | with self.assertRaises(ValueError): ORC.CPR(dataIn=data, dataInType='im', kt=ktraj, 121 | df=field_map_CPR) # cartesian 122 | 123 | def test_CPR_wrongInputDimensions_rawdata(self): 124 | # raw data dimensions do not match ktraj dimensions 125 | data = np.ones((acq_params['N'], acq_params['N'] + 1)) 126 | with self.assertRaises(ValueError): ORC.CPR(dataIn=data, dataInType='raw', kt=ktraj, 127 | df=field_map_CPR) # cartesian 128 | 129 | 130 | class testParametersDictionary(unittest.TestCase): 131 | def test_N(self): 132 | # Test that an error is raised if N specified in the parameters dictionary does not match the image dimensions 133 | params_dict = acq_params.copy() 134 | params_dict['N'] = 1 135 | with self.assertRaises(ValueError): ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, 136 | df=field_map_CPR, nonCart=1, params=params_dict) 137 | with self.assertRaises(ValueError): ORC.fs_CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, 138 | df=field_map_CPR, Lx=1, nonCart=1, params=params_dict) 139 | with self.assertRaises(ValueError): ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, 140 | df=field_map_CPR, Lx=1, nonCart=1, params=params_dict) 141 | 142 | def test_Npoints(self): 143 | params_dict = acq_params.copy() 144 | params_dict['Npoints'] = 10 145 | with self.assertRaises(ValueError): ORC.CPR(dataIn=raw_data[:, :, 0], dataInType='raw', 
kt=ktraj, 146 | df=field_map_CPR, nonCart=1, params=params_dict) 147 | with self.assertRaises(ValueError): ORC.fs_CPR(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, 148 | df=field_map_CPR, Lx=1, nonCart=1, params=params_dict) 149 | with self.assertRaises(ValueError): ORC.MFI(dataIn=raw_data[:, :, 0], dataInType='raw', kt=ktraj, 150 | df=field_map_CPR, Lx=1, nonCart=1, params=params_dict) 151 | 152 | 153 | 154 | 155 | 156 | if __name__ == "__main__": 157 | unittest.main() -------------------------------------------------------------------------------- /tests/utests_imtransform.py: -------------------------------------------------------------------------------- 1 | # Copyright of the Board of Trustees of Columbia University in the City of New York 2 | import unittest 3 | import matplotlib.pyplot as plt 4 | import numpy as np 5 | 6 | from skimage.data import shepp_logan_phantom 7 | from skimage.transform import resize 8 | 9 | from OCTOPUS.recon.imtransforms import im2ksp, ksp2im, nufft_init 10 | 11 | # Cartesian data 12 | N = 192 13 | ph = resize(shepp_logan_phantom(), (N,N)).astype(complex) 14 | cart_ksp = np.fft.fftshift(np.fft.fft2(ph)) 15 | 16 | # Non-Cartesian data 17 | ktraj = np.load('test_data/ktraj_noncart.npy') 18 | ktraj_dcf = np.load('test_data/ktraj_noncart_dcf.npy').flatten() 19 | 20 | noncart_ksp = np.load('test_data/slph_noncart_ksp.npy') 21 | acq_params = {'Npoints': ktraj.shape[0], 'Nshots': ktraj.shape[1], 'N': ph.shape[0],'dcf': ktraj_dcf} 22 | 23 | 24 | class TestImageTransformation(unittest.TestCase): 25 | def test_im2ksp_cart(self): 26 | # Image to k-space transformation for Cartesian data 27 | ksp = im2ksp(M=ph, cartesian_opt=1) 28 | self.assertEqual(ksp.shape, ph.shape) # Dimensions agree 29 | self.assertEqual(((ksp - cart_ksp) == np.zeros(ksp.shape).astype(complex)).all(), True) # Transformation is correct 30 | 31 | def test_ksp2im_cart(self): 32 | # K-space to image transformation for Cartesian data 33 | im = ksp2im(ksp=cart_ksp, 
cartesian_opt=1) 34 | self.assertEqual(im.shape, cart_ksp.shape) # Dimensions agree 35 | self.assertEqual((np.round(im - ph) == np.zeros(im.shape).astype(complex)).all(), True) # Transformation is correct 36 | 37 | def test_im2ksp_noncart(self): 38 | # Image to k-space transformation for non-Cartesian data 39 | nufft_obj = nufft_init(kt=ktraj, params=acq_params) 40 | ksp = im2ksp(M=ph, cartesian_opt=0, NufftObj=nufft_obj, params=acq_params) # Sample the image along the trajectory 41 | self.assertEqual(ksp.shape, ktraj.shape) # Dimensions match 42 | 43 | def test_ksp2im_noncart(self): 44 | # K-space to image transformation for non-Cartesian data 45 | nufft_obj = nufft_init(kt=ktraj, params=acq_params) 46 | im = ksp2im(ksp=noncart_ksp, cartesian_opt=0, NufftObj=nufft_obj, params=acq_params) # reconstructed image 47 | self.assertEqual(im.shape, ph.shape) # Dimensions match 48 | im = (im - np.min(im)) / (np.max(im) - np.min(im)) # normalize image 49 | diff = np.round(im - ph) 50 | diff_percent = np.nonzero(diff)[0].shape[0] / N**2 * 100 51 | self.assertLess(diff_percent, 5) # Less than 5% of voxels should differ between original and reconstructed image 52 | 53 | class TestRaiseErrors(unittest.TestCase): 54 | # Test that error messages are raised when given invalid inputs 55 | def test_cartesian_opt(self): 56 | # cartesian_opt can only be 0 or 1, otherwise error 57 | with self.assertRaises(ValueError): im2ksp(M=ph, cartesian_opt=5) 58 | with self.assertRaises(ValueError): ksp2im(ksp=cart_ksp, cartesian_opt=100) 59 | 60 | def test_missing_params_Nshots(self): 61 | # NUFFT for non-Cartesian data will fail if a required parameter is missing from the parameters dictionary (e.g.
Nshots) 62 | params = acq_params.copy() 63 | del params['Nshots'] 64 | with self.assertRaises(ValueError): nufft_init(kt=ktraj, params=params) 65 | 66 | def test_missing_params_Npoints(self): 67 | # NUFFT for non-Cartesian data will fail if a required parameter is missing from the parameters dictionary (e.g. Npoints) 68 | params = acq_params.copy() 69 | del params['Npoints'] 70 | with self.assertRaises(ValueError): nufft_init(kt=ktraj, params=params) 71 | 72 | def test_missing_params_N(self): 73 | # NUFFT for non-Cartesian data will fail if a required parameter is missing from the parameters dictionary (e.g. N) 74 | params = acq_params.copy() 75 | del params['N'] 76 | with self.assertRaises(ValueError): nufft_init(kt=ktraj, params=params) 77 | 78 | 79 | if __name__ == "__main__": 80 | unittest.main() --------------------------------------------------------------------------------
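For reference, the Cartesian branch exercised by `test_im2ksp_cart` and `test_ksp2im_cart` in `utests_imtransform.py` amounts to a centered 2D FFT pair: the tests compare against `fftshift(fft2(ph))` and expect `ksp2im` to invert it. A standalone NumPy sketch consistent with those reference values (the actual `OCTOPUS.recon.imtransforms` functions may differ in implementation details):

```python
import numpy as np

def im2ksp_cart(im):
    # Centered 2D FFT, matching the test's reference: cart_ksp = fftshift(fft2(ph))
    return np.fft.fftshift(np.fft.fft2(im))

def ksp2im_cart(ksp):
    # Inverse transform: undo the shift, then apply the inverse 2D FFT
    return np.fft.ifft2(np.fft.ifftshift(ksp))

# The round trip recovers the image to floating-point precision,
# which is why the tests can assert (near-)exact equality.
rng = np.random.default_rng(0)
im = rng.standard_normal((8, 8)).astype(complex)
assert np.allclose(ksp2im_cart(im2ksp_cart(im)), im)
```

The non-Cartesian branch replaces this pair with a NUFFT and density compensation, which is why those tests only assert shapes and a voxel-difference tolerance rather than exact recovery.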