├── LICENSE
├── README.md
├── account.json
├── chain.json
├── deployed.json
├── docker_keras_cpu
│   ├── Dockerfile
│   ├── Dockerfile_MNIST
│   ├── maskrcnn.py
│   └── mnist_cnn.py
├── frontend
│   ├── .babelrc
│   ├── README.md
│   ├── docs
│   │   ├── build.js
│   │   ├── build.js.map
│   │   └── index.html
│   ├── index.html
│   ├── package.json
│   ├── public
│   │   ├── favicon-32x32.png
│   │   └── v.png
│   ├── src
│   │   ├── App.vue
│   │   ├── chains.js
│   │   ├── components
│   │   │   ├── Category.vue
│   │   │   ├── Orders.vue
│   │   │   ├── Work.vue
│   │   │   └── Works.vue
│   │   ├── main.js
│   │   └── utils
│   │       └── extensions.js
│   ├── webpack.config.js
│   └── yarn.lock
├── iexec.json
├── img
│   ├── 20180604_143926.png
│   ├── Alex-rd.png
│   ├── Jeddi-rd168.png
│   ├── Mattew-rd168.png
│   ├── Screenshot_from_2018-06-09_14-55-45.png
│   ├── architecture_1.png
│   ├── architecture_2.png
│   ├── ben-rd168.png
│   ├── front-work.jpg
│   ├── front_ai2.jpg
│   ├── front_preview.png
│   ├── iexec-team-MRCNN.png
│   └── iexec-team.jpeg
├── openmined
│   ├── .ipynb_checkpoints
│   │   └── Distributed_AI_OpenMined_Tests-checkpoint.ipynb
│   ├── OpenMined_Tests.ipynb
│   ├── README.md
│   ├── Running_Grid.py
│   └── pytorch-mask-rcnn-master
│       ├── .gitignore
│       ├── LICENSE
│       ├── README.md
│       ├── assets
│       │   ├── detection_anchors.png
│       │   ├── detection_final.png
│       │   ├── detection_masks.png
│       │   ├── detection_refinement.png
│       │   ├── park.png
│       │   └── street.png
│       ├── coco.py
│       ├── config.py
│       ├── convert_from_keras.py
│       ├── demo.py
│       ├── images
│       │   ├── 1045023827_4ec3e8ba5c_z.jpg
│       │   ├── 12283150_12d37e6389_z.jpg
│       │   ├── 2383514521_1fc8d7b0de_z.jpg
│       │   ├── 2502287818_41e4b0c4fb_z.jpg
│       │   ├── 2516944023_d00345997d_z.jpg
│       │   ├── 25691390_f9944f61b5_z.jpg
│       │   ├── 262985539_1709e54576_z.jpg
│       │   ├── 3132016470_c27baa00e8_z.jpg
│       │   ├── 3627527276_6fe8cd9bfe_z.jpg
│       │   ├── 3651581213_f81963d1dd_z.jpg
│       │   ├── 3800883468_12af3c0b50_z.jpg
│       │   ├── 3862500489_6fd195d183_z.jpg
│       │   ├── 3878153025_8fde829928_z.jpg
│       │   ├── 4410436637_7b0ca36ee7_z.jpg
│       │   ├── 4782628554_668bc31826_z.jpg
│       │   ├── 5951960966_d4e1cda5d0_z.jpg
│       │   ├── 6584515005_fce9cec486_z.jpg
│       │   ├── 6821351586_59aa0dc110_z.jpg
│       │   ├── 7581246086_cf7bbb7255_z.jpg
│       │   ├── 7933423348_c30bd9bd4e_z.jpg
│       │   ├── 8053677163_d4c8f416be_z.jpg
│       │   ├── 8239308689_efa6c11b08_z.jpg
│       │   ├── 8433365521_9252889f9a_z.jpg
│       │   ├── 8512296263_5fc5458e20_z.jpg
│       │   ├── 8699757338_c3941051b6_z.jpg
│       │   ├── 8734543718_37f6b8bd45_z.jpg
│       │   ├── 8829708882_48f263491e_z.jpg
│       │   ├── 9118579087_f9ffa19e63_z.jpg
│       │   └── 9247489789_132c0d534a_z.jpg
│       ├── model.py
│       ├── nms
│       │   ├── __init__.py
│       │   ├── build.py
│       │   ├── nms_wrapper.py
│       │   ├── pth_nms.py
│       │   └── src
│       │       ├── cuda
│       │       │   ├── nms_kernel.cu
│       │       │   └── nms_kernel.h
│       │       ├── nms.c
│       │       ├── nms.h
│       │       ├── nms_cuda.c
│       │       └── nms_cuda.h
│       ├── roialign
│       │   ├── __init__.py
│       │   └── roi_align
│       │       ├── __init__.py
│       │       ├── build.py
│       │       ├── crop_and_resize.py
│       │       ├── roi_align.py
│       │       └── src
│       │           ├── crop_and_resize.c
│       │           ├── crop_and_resize.h
│       │           ├── crop_and_resize_gpu.c
│       │           ├── crop_and_resize_gpu.h
│       │           └── cuda
│       │               ├── crop_and_resize_kernel.cu
│       │               └── crop_and_resize_kernel.h
│       ├── utils.py
│       └── visualize.py
├── wallet.json
└── whitepaper
    └── whitepaper.md
/LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it.
By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. 
The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. 
You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. 
You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. 
If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | <one line to give the program's name and a brief idea of what it does.> 294 | Copyright (C) <year> <name of author> 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail.
311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | <signature of Ty Coon>, 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. 340 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Decentralized AI (final project for Siraj's School of AI) 2 | 3 | ## Authors 4 | Benoit Courty, Matthew McAteer, Alexandre Moreau and Jeddi Mees 5 | 6 | For more background info, read our [Whitepaper](https://github.com/trancept/decentralized_AI/blob/master/whitepaper/whitepaper.md). 7 | 8 | This is our attempt at building a "Decentralized AI". In practice, it is a semantic segmentation task that runs in a decentralized fashion: the task is executed on a machine somewhere on the internet, as in a proprietary cloud, but on a decentralized cloud, so you do not have to create an account with the owner of the machine. Everything is handled by iExec. 9 | 10 | The semantic segmentation is done by the [Mask RCNN](https://github.com/matterport/Mask_RCNN) project trained on the [COCO Dataset](http://cocodataset.org/). 11 | 12 | Submit an image: 13 | ![FrontUI](https://raw.githubusercontent.com/trancept/decentralized_AI/master/img/front_ai2.jpg) 14 | 15 | Get the result in the work tab: 16 | ![FrontUIWork](https://raw.githubusercontent.com/trancept/decentralized_AI/master/img/front-work.jpg) 17 | 18 | And you are done: 19 | ![SampleResult](https://raw.githubusercontent.com/trancept/decentralized_AI/master/img/iexec-team-MRCNN.png) 20 | 21 | 22 | Another sample: 23 | ![MASK_R-CNN](https://github.com/trancept/decentralized_AI/blob/master/img/20180604_143926.png) 24 | 25 | The Docker image was based on the Modern Deep-learning container from [Waleed Abdulla](https://hub.docker.com/r/waleedka/modern-deep-learning/), with Mask RCNN added to it, along with a modified version of the demo, packaged for iExec. 26 | 27 | [iExec is a whole ecosystem with a marketplace for DApps, an oracle mechanism, a scheduler, workers](https://cdn-images-1.medium.com/max/1200/1*iiERfyS1iqvVXNCXFrghfA.jpeg), and more, dedicated to off-chain computing in a fully decentralized way.
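To make this concrete, here is a condensed sketch of the inference step each worker executes. It is an illustration, not the exact script: the full version lives in `docker_keras_cpu/maskrcnn.py` (shown later in this repo) and additionally downloads the input image from a URL, saves an annotated PNG, and writes `/iexec/consensus.iexec` for the PoCo verification. `ROOT_DIR` matches the layout built by the Dockerfile; `input.jpg` is a placeholder filename.

```python
# Condensed sketch of the worker-side Mask R-CNN inference (see maskrcnn.py for the full script).
import os
import sys

import skimage.io

ROOT_DIR = "/usr/local/src/Mask_RCNN"                     # layout from the Dockerfile
sys.path.append(ROOT_DIR)                                 # find the mrcnn package
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # find coco.py
import coco
import mrcnn.model as modellib


class InferenceConfig(coco.CocoConfig):
    # Batch size is GPU_COUNT * IMAGES_PER_GPU: run one image at a time.
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1


model = modellib.MaskRCNN(mode="inference",
                          model_dir=os.path.join(ROOT_DIR, "logs"),
                          config=InferenceConfig())
model.load_weights(os.path.join(ROOT_DIR, "mask_rcnn_coco.h5"), by_name=True)

image = skimage.io.imread("input.jpg")   # placeholder for the downloaded input image
r = model.detect([image], verbose=1)[0]  # dict with 'rois', 'masks', 'class_ids', 'scores'
print(r["class_ids"], r["scores"])
```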
28 | 29 | V2 has just been released (as of 1 June 2018). 30 | 31 | The [iExec SDK](https://github.com/iExecBlockchainComputing/iexec-sdk) is a Node.js application that lets you easily create and manage your application. 32 | 33 | The result is that you can call it much like an API to get your resulting image: 34 | 35 | # How to Run 36 | ## iExec Front Side 37 | 38 | You can use it in the browser: http://nrxubuntu.eastus2.cloudapp.azure.com/ 39 | 40 | Get ETH and RLC for Kovan: 41 | - Connect MetaMask and switch to the Kovan Ethereum test network. Ask for free ETH on the [Kovan faucet](https://gitter.im/kovan-testnet) and for free RLC on the [iExec marketplace](https://market.iex.ec/), then transfer RLC from your wallet to your "account" (top left of the [iExec marketplace](https://market.iex.ec/)). 42 | 43 | Build it from source: 44 | ``` 45 | cd frontend 46 | npm install 47 | npm run dev 48 | ``` 49 | Your browser will automatically open localhost:8081 so you can access the frontend. 50 | - Choose an image from your hard disk or copy-paste a URL. 51 | - Choose a worker in the list on the right. 52 | - Click on the "iExec" button. 53 | 54 | ## OpenMined Side 55 | 56 | In the [openmined](https://github.com/trancept/decentralized_AI/tree/master/openmined) folder you will find a demo of how to use OpenMined to train a model using decentralized grid-computing capabilities. 57 | 58 | # How we made it 59 | 60 | ## Building the Docker Image 61 | 62 | ``` 63 | docker build docker_keras_cpu/ -t trancept/keras_mrcnn:v0 64 | docker run -v $(pwd):/iexec trancept/keras_mrcnn:v0 http://fr.ubergizmo.com/wp-content/uploads/2017/11/nouvel-algorithme-correction-panoramas-google-street-view.jpg 65 | docker push trancept/keras_mrcnn:v0 66 | ``` 67 | 68 | 69 | ## iExec project 70 | 71 | ``` 72 | # Init project 73 | 74 | # Get money 75 | iexec wallet getRLC 76 | => For ETH on Kovan, you have to ask for it on the [Kovan faucet](https://gitter.im/kovan-testnet) 77 | # Check your wallet 78 | iexec wallet show 79 | => You need to have ETH and RLC 80 | # Send money to the iExec account/Marketplace to use it 81 | iexec account deposit 100 82 | # Check money 83 | iexec account show 84 | ``` 85 | 86 | ### Deploy 87 | 88 | Adding the Docker image to iExec: 89 | - Edit iexec.json 90 | - Run: 91 | ``` 92 | iexec app deploy 93 | iexec app show 94 | ``` 95 | Prepare the order: 96 | 97 | ``` 98 | iexec order init --buy 99 | ``` 100 | *Important*: at this step you have to edit the "params" string in iexec.json to match the parameters you want to send to the job. 101 | 102 | 103 | ### How to execute the iExec Dapp 104 | 105 | #### Easiest way 106 | 107 | The easiest way is to go to https://market.iex.ec/ and place a Buy order with: 108 | - An available sell order ID 109 | - Dapp Address: 0xc790D024Ec41a7649E7a0590e4AE05891fA61ef8 110 | - Work Params: {"cmdline":"https://storage.canalblog.com/78/32/802934/60160490.jpg"} 111 | 112 | #### Command line way 113 | 114 | - Clone the repository 115 | - Change the image URL in iexec.json 116 | - Run 117 | 118 | You have to initiate an order to buy a computing resource, then find an available one, then buy it!
119 | 120 | ``` 121 | # Show available computing ressource 122 | iexec orderbook show --category 3 123 | # Check a ressource 124 | iexec order show 170 125 | # Buy the ressource 126 | iexec order fill 170 127 | # Check the status 128 | iexec work show 0xfda65e0d09bf434ea1e52f4ec044a07d6e7d592d --watch --download 129 | ``` 130 | -------------------------------------------------------------------------------- /account.json: -------------------------------------------------------------------------------- 1 | { 2 | "jwtoken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJibG9ja2NoYWluYWRkciI6IjB4OWJjODEzYzlkMWNlNzgzYTM3ODQxY2E3NzUxMzU5YjM1YTlhZDc2MyIsImlzcyI6Inh3aGVwMTEiLCJpYXQiOjE1Mjc3MDk3ODV9.i48Tf4wtRFrwee4mGHwH83fbYApP_m0zMnTkSiO_4xQ" 3 | } -------------------------------------------------------------------------------- /chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "chains": { 3 | "dev": { 4 | "host": "http://localhost:8545", 5 | "id": "1337", 6 | "server": "https://localhost:443", 7 | "hub": "0xc4e4a08bf4c6fd11028b714038846006e27d7be8" 8 | }, 9 | "ropsten": { 10 | "host": "https://ropsten.infura.io/berv5GTB5cSdOJPPnqOq", 11 | "id": "3", 12 | "server": "https://testxw.iex.ec:443" 13 | }, 14 | "rinkeby": { 15 | "host": "https://rinkeby.infura.io/berv5GTB5cSdOJPPnqOq", 16 | "id": "4", 17 | "server": "https://testxw.iex.ec:443" 18 | }, 19 | "kovan": { 20 | "host": "https://kovan.infura.io/berv5GTB5cSdOJPPnqOq", 21 | "id": "42", 22 | "server": "https://testxw.iex.ec:443" 23 | }, 24 | "mainnet": { 25 | "host": "https://mainnet.infura.io/berv5GTB5cSdOJPPnqOq ", 26 | "id": "1", 27 | "server": "https://mainxw.iex.ec:443" 28 | } 29 | } 30 | } -------------------------------------------------------------------------------- /deployed.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": { 3 | "42": "0x4c86e3fbfb99a77489062f0fe04bcbf910c85973" 4 | }, 5 | "work": { 6 | "42": "0x2cf541c705faa1ab3c8db136115128d0282a2ffc" 7 | } 8 | } -------------------------------------------------------------------------------- /docker_keras_cpu/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | # Based on Waleed Abdulla work 3 | MAINTAINER Benoit Courty 4 | 5 | # Supress warnings about missing front-end. 
As recommended at: 6 | # http://stackoverflow.com/questions/22466255/is-it-possibe-to-answer-dialog-questions-when-installing-under-docker 7 | ARG DEBIAN_FRONTEND=noninteractive 8 | 9 | # Essentials: developer tools, build tools, OpenBLAS 10 | RUN apt-get update && apt-get install -y --no-install-recommends \ 11 | apt-utils git curl vim unzip openssh-client wget \ 12 | build-essential cmake \ 13 | libopenblas-dev && apt-get clean -y 14 | 15 | # 16 | # Python 3.5 17 | # 18 | # For convenience, alias (but don't sym-link) python & pip to python3 & pip3 as recommended in: 19 | # http://askubuntu.com/questions/351318/changing-symlink-python-to-python3-causes-problems 20 | RUN apt-get install -y --no-install-recommends python3.5 python3.5-dev python3-pip python3-tk && apt-get clean -y && \ 21 | pip3 install --no-cache-dir --upgrade pip setuptools && \ 22 | echo "alias python='python3'" >> /root/.bash_aliases && \ 23 | echo "alias pip='pip3'" >> /root/.bash_aliases 24 | # Pillow and its dependencies 25 | RUN apt-get install -y --no-install-recommends libjpeg-dev zlib1g-dev && \ 26 | pip3 --no-cache-dir install Pillow 27 | # Science libraries and other common packages 28 | RUN pip3 --no-cache-dir install \ 29 | numpy scipy scikit-image matplotlib Cython 30 | #sklearn requests pandas 31 | 32 | # 33 | # Tensorflow 1.6.0 - CPU 34 | # 35 | RUN pip3 install --no-cache-dir --upgrade tensorflow 36 | 37 | # Expose port for TensorBoard 38 | #EXPOSE 6006 39 | 40 | # 41 | # OpenCV 3.4.1 42 | # 43 | # Dependencies 44 | RUN apt-get install -y --no-install-recommends \ 45 | libjpeg8-dev libtiff5-dev libjasper-dev libpng12-dev \ 46 | libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk2.0-dev \ 47 | liblapacke-dev checkinstall 48 | # Get source from github 49 | RUN git clone -b 3.4.1 --depth 1 https://github.com/opencv/opencv.git /usr/local/src/opencv 50 | # Compile 51 | RUN cd /usr/local/src/opencv && mkdir build && cd build && \ 52 | cmake -D CMAKE_INSTALL_PREFIX=/usr/local \ 53 | -D BUILD_TESTS=OFF \ 54 | -D BUILD_PERF_TESTS=OFF \ 55 | -D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \ 56 | .. && \ 57 | make -j"$(nproc)" && \ 58 | make install && \ 59 | cd .. && rm -rf /usr/local/src/opencv 60 | 61 | # 62 | # Keras 2.1.5 63 | # 64 | RUN pip3 install --no-cache-dir --upgrade h5py pydot_ng keras 65 | 66 | # 67 | # PyCocoTools 68 | # 69 | # Using a fork of the original that has a fix for Python 3. 70 | # I submitted a PR to the original repo (https://github.com/cocodataset/cocoapi/pull/50) 71 | # but it doesn't seem to be active anymore.
RUN pip3 install --no-cache-dir git+https://github.com/waleedka/coco.git#subdirectory=PythonAPI 73 | 74 | RUN pip3 --no-cache-dir install imgaug IPython 75 | 76 | # Set matplotlib to headless mode 77 | RUN mkdir -p /root/.config/matplotlib/ && echo "backend : Agg" > /root/.config/matplotlib/matplotlibrc 78 | 79 | # Install Mask_RCNN 80 | #RUN git clone --depth 1 https://github.com/matterport/Mask_RCNN.git /usr/local/src/Mask_RCNN 81 | RUN git clone https://github.com/gtgalone/Mask_RCNN.git /usr/local/src/Mask_RCNN && \ 82 | cd /usr/local/src/Mask_RCNN && git checkout fix-keras-engine-topology && rm -rf /usr/local/src/Mask_RCNN/.git && \ 83 | cd /usr/local/src/Mask_RCNN && $(which python3) setup.py install 84 | 85 | 86 | 87 | RUN wget --quiet https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5 -O /usr/local/src/Mask_RCNN/mask_rcnn_coco.h5 88 | 89 | # Clean: useless, as Docker keeps the original files in the image history 90 | #RUN rm -rf /usr/local/src/opencv 91 | # /build 92 | # && make clean 93 | RUN apt-get purge -y --no-install-recommends \ 94 | apt-utils git curl vim unzip openssh-client wget \ 95 | build-essential cmake \ 96 | libopenblas-dev 97 | #RUN rm -rf /usr/local/src/Mask_RCNN/.git 98 | 99 | WORKDIR "/usr/local/src/Mask_RCNN/samples" 100 | ADD maskrcnn.py /usr/local/src/Mask_RCNN/samples 101 | # CMD ["/bin/bash"] 102 | ENTRYPOINT ["/bin/bash", "-c", "/usr/local/src/Mask_RCNN/samples/maskrcnn.py \"$@\"", "--"] 103 | -------------------------------------------------------------------------------- /docker_keras_cpu/Dockerfile_MNIST: -------------------------------------------------------------------------------- 1 | # docker-keras - Keras in Docker with Python 3 and TensorFlow on CPU 2 | 3 | FROM debian:stretch 4 | MAINTAINER gw0 [http://gw.tnode.com/] 5 | # Dockerfile is based on work from https://github.com/gw0/docker-keras, thanks to him.
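# NOTE (assumption, not from the repo docs): the README only shows the build for the main image; this one would be built from the docker_keras_cpu directory with something like `docker build -f Dockerfile_MNIST -t trancept/keras_mnist:v0 .` (the tag is hypothetical).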
6 | 7 | # install debian packages 8 | ENV DEBIAN_FRONTEND noninteractive 9 | RUN apt-get update -qq \ 10 | && apt-get install --no-install-recommends -y \ 11 | # install essentials 12 | build-essential \ 13 | g++ \ 14 | git \ 15 | openssh-client \ 16 | # install python 3 17 | python3 \ 18 | python3-dev \ 19 | python3-pip \ 20 | python3-setuptools \ 21 | python3-virtualenv \ 22 | python3-wheel \ 23 | pkg-config \ 24 | # requirements for numpy 25 | libopenblas-base \ 26 | python3-numpy \ 27 | python3-scipy \ 28 | # requirements for keras 29 | python3-h5py \ 30 | python3-yaml \ 31 | python3-pydot \ 32 | && apt-get clean \ 33 | && rm -rf /var/lib/apt/lists/* 34 | 35 | # manually update numpy 36 | RUN pip3 --no-cache-dir install -U numpy==1.13.3 37 | 38 | ARG TENSORFLOW_VERSION=1.5.0 39 | ARG TENSORFLOW_DEVICE=cpu 40 | ARG TENSORFLOW_APPEND= 41 | RUN pip3 --no-cache-dir install https://storage.googleapis.com/tensorflow/linux/${TENSORFLOW_DEVICE}/tensorflow${TENSORFLOW_APPEND}-${TENSORFLOW_VERSION}-cp35-cp35m-linux_x86_64.whl 42 | 43 | ARG KERAS_VERSION=2.1.4 44 | ENV KERAS_BACKEND=tensorflow 45 | RUN pip3 --no-cache-dir install --no-dependencies git+https://github.com/fchollet/keras.git@${KERAS_VERSION} 46 | 47 | # quick test and dump package lists 48 | RUN python3 -c "import tensorflow; print(tensorflow.__version__)" \ 49 | && dpkg-query -l > /dpkg-query-l.txt \ 50 | && pip3 freeze > /pip3-freeze.txt 51 | 52 | WORKDIR /srv/ 53 | ADD mnist_cnn.py /usr/local/bin/ 54 | ENTRYPOINT ["/bin/bash", "-c", "/usr/local/bin/mnist_cnn.py \"$@\"", "--"] 55 | -------------------------------------------------------------------------------- /docker_keras_cpu/maskrcnn.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | import os 3 | import sys 4 | import random 5 | import math 6 | import numpy as np 7 | import skimage.io 8 | import matplotlib 9 | matplotlib.use('Agg') 10 | import matplotlib.pyplot as plt 11 | from datetime import datetime 12 | import urllib.request 13 | 14 | # Force no GPU 15 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152 16 | os.environ["CUDA_VISIBLE_DEVICES"] = "" 17 | 18 | # Root directory of the project 19 | ROOT_DIR = os.path.abspath("../") 20 | 21 | # Import Mask RCNN 22 | sys.path.append(ROOT_DIR) # To find local version of the library 23 | from mrcnn import utils 24 | import mrcnn.model as modellib 25 | from mrcnn import visualize 26 | # Import COCO config 27 | sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version 28 | import coco 29 | 30 | #%matplotlib inline 31 | 32 | # Directory to save logs and trained model 33 | MODEL_DIR = os.path.join(ROOT_DIR, "logs") 34 | 35 | # Local path to trained weights file 36 | COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5") 37 | # Download COCO trained weights from Releases if needed 38 | if not os.path.exists(COCO_MODEL_PATH): 39 | utils.download_trained_weights(COCO_MODEL_PATH) 40 | 41 | # Directory of images to run detection on 42 | IMAGE_DIR = os.path.join(ROOT_DIR, "images") 43 | 44 | 45 | # Start by reading the file: no need to continue if the input is incorrect 46 | if len(sys.argv) > 1: 47 | print("# Load image from URL given on the command line") 48 | #input_file = "/iexec/input.img" 49 | #urllib.request.urlretrieve(str(sys.argv[1]), input_file) 50 | input_file, headers = urllib.request.urlretrieve(str(sys.argv[1])) 51 | else: 52 | print("# ERROR: You need to provide an image URL as a parameter!") 53 | sys.exit(-1) 54 | 55 | image =
skimage.io.imread(os.path.join(IMAGE_DIR, input_file)) # input_file from urlretrieve is an absolute temp path, so os.path.join effectively ignores IMAGE_DIR here 56 | 57 | 58 | class InferenceConfig(coco.CocoConfig): 59 | # Set batch size to 1 since we'll be running inference on 60 | # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU 61 | GPU_COUNT = 1 62 | IMAGES_PER_GPU = 1 63 | 64 | config = InferenceConfig() 65 | #config.display() 66 | 67 | print("# Create model object in inference mode.") 68 | model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config) 69 | 70 | print("# Load weights trained on MS-COCO") 71 | model.load_weights(COCO_MODEL_PATH, by_name=True) 72 | 73 | # COCO Class names 74 | # Index of the class in the list is its ID. For example, to get ID of 75 | # the teddy bear class, use: class_names.index('teddy bear') 76 | class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 77 | 'bus', 'train', 'truck', 'boat', 'traffic light', 78 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 79 | 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 80 | 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 81 | 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 82 | 'kite', 'baseball bat', 'baseball glove', 'skateboard', 83 | 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 84 | 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 85 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 86 | 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 87 | 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 88 | 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 89 | 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 90 | 'teddy bear', 'hair drier', 'toothbrush'] 91 | 92 | print("# Run detection") 93 | results = model.detect([image], verbose=1) 94 | 95 | print("Visualize results") 96 | r = results[0] 97 | 98 | visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], 99 | class_names, r['scores']) 100 | output_file = "{}/{}.png".format("/iexec", datetime.now().strftime('%Y%m%d_%H%M%S')) 101 | print("Save results to ", output_file) 102 | plt.savefig(output_file, bbox_inches='tight', pad_inches=0) 103 | print('# Save /iexec/consensus.iexec for the PoCo verification') 104 | consensus_file = "/iexec/consensus.iexec" 105 | boxes = r['rois'] 106 | class_ids = r['class_ids'] 107 | scores = r['scores'] 108 | consensus = "" 109 | N = r['rois'].shape[0] 110 | for i in range(N): 111 | # Bounding box 112 | if not np.any(boxes[i]): 113 | # Skip this instance. Has no bbox 114 | continue 115 | y1, x1, y2, x2 = boxes[i] 116 | rectangle = " Box = " + str(x1) +"x"+ str(y1) + " " + str(x2) + "x" + str(y2) 117 | 118 | # Label 119 | class_id = class_ids[i] 120 | score = scores[i] if scores is not None else None 121 | label = class_names[class_id] 122 | x = random.randint(x1, (x1 + x2) // 2) # unused; leftover from matterport's display_instances caption placement 123 | caption = "{} {:.3f}".format(label, score) if score else label 124 | consensus += caption + rectangle + "\r\n" 125 | #print(consensus) 126 | with open(consensus_file, 'w') as fp: 127 | fp.write(consensus) 128 | # no explicit close needed: the with-block closes the file 129 | -------------------------------------------------------------------------------- /docker_keras_cpu/mnist_cnn.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | '''Trains a simple convnet on the MNIST dataset. 3 | 4 | Gets to 99.25% test accuracy after 12 epochs 5 | (there is still a lot of margin for parameter tuning). 6 | 16 seconds per epoch on a GRID K520 GPU.
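(Note: in this repo the script runs as the Dockerfile_MNIST container entrypoint, and `epochs` is set to 1 below instead of 12 — presumably to keep the iExec demo job short.)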
7 | ''' 8 | 9 | from __future__ import print_function 10 | import keras 11 | from keras.datasets import mnist 12 | from keras.models import Sequential 13 | from keras.layers import Dense, Dropout, Flatten 14 | from keras.layers import Conv2D, MaxPooling2D 15 | from keras import backend as K 16 | 17 | batch_size = 128 18 | num_classes = 10 19 | epochs = 1 # 12 20 | 21 | # input image dimensions 22 | img_rows, img_cols = 28, 28 23 | 24 | # the data, split between train and test sets 25 | (x_train, y_train), (x_test, y_test) = mnist.load_data() 26 | 27 | if K.image_data_format() == 'channels_first': 28 | x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols) 29 | x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols) 30 | input_shape = (1, img_rows, img_cols) 31 | else: 32 | x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) 33 | x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) 34 | input_shape = (img_rows, img_cols, 1) 35 | 36 | x_train = x_train.astype('float32') 37 | x_test = x_test.astype('float32') 38 | x_train /= 255 39 | x_test /= 255 40 | print('x_train shape:', x_train.shape) 41 | print(x_train.shape[0], 'train samples') 42 | print(x_test.shape[0], 'test samples') 43 | 44 | # convert class vectors to binary class matrices 45 | y_train = keras.utils.to_categorical(y_train, num_classes) 46 | y_test = keras.utils.to_categorical(y_test, num_classes) 47 | 48 | model = Sequential() 49 | model.add(Conv2D(32, kernel_size=(3, 3), 50 | activation='relu', 51 | input_shape=input_shape)) 52 | model.add(Conv2D(64, (3, 3), activation='relu')) 53 | model.add(MaxPooling2D(pool_size=(2, 2))) 54 | model.add(Dropout(0.25)) 55 | model.add(Flatten()) 56 | model.add(Dense(128, activation='relu')) 57 | model.add(Dropout(0.5)) 58 | model.add(Dense(num_classes, activation='softmax')) 59 | 60 | model.compile(loss=keras.losses.categorical_crossentropy, 61 | optimizer=keras.optimizers.Adadelta(), 62 | metrics=['accuracy']) 63 | 64 | model.fit(x_train, y_train, 65 | batch_size=batch_size, 66 | epochs=epochs, 67 | verbose=1, 68 | validation_data=(x_test, y_test)) 69 | score = model.evaluate(x_test, y_test, verbose=0) 70 | print('Test loss:', score[0]) 71 | print('Test accuracy:', score[1]) 72 | -------------------------------------------------------------------------------- /frontend/.babelrc: -------------------------------------------------------------------------------- 1 | { 2 | "plugins": [ 3 | ["transform-runtime", { 4 | "polyfill": false, 5 | "regenerator": true 6 | }] 7 | ] 8 | } 9 | -------------------------------------------------------------------------------- /frontend/README.md: -------------------------------------------------------------------------------- 1 | # decentralized_ai 2 | 3 | > A Vue.js project 4 | 5 | ## Build Setup 6 | 7 | ``` bash 8 | # install dependencies 9 | npm install 10 | 11 | # serve with hot reload at localhost:8080 12 | npm run dev 13 | 14 | # build for production with minification 15 | npm run build 16 | ``` 17 | 18 | For a detailed explanation of how things work, consult the [docs for vue-loader](http://vuejs.github.io/vue-loader).
19 | 20 | 21 | ## IPFS 22 | 23 | First install [IPFS](https://ipfs.io/docs/install/) 24 | 25 | ``` 26 | ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/9001 27 | ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001 28 | ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST", "OPTIONS"]' 29 | ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]' 30 | ipfs daemon 31 | ``` -------------------------------------------------------------------------------- /frontend/docs/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Welcome to Decentralized AI 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
13 | 14 | 15 | 16 | -------------------------------------------------------------------------------- /frontend/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Welcome to Decentralized AI 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
13 | 14 | 15 | 16 | -------------------------------------------------------------------------------- /frontend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "decentralized_ai", 3 | "description": "A Vue.js project", 4 | "version": "1.0.0", 5 | "author": "Alexandre Moreau ", 6 | "homepage": "https://github.com/alemoreau/decentralized_AI", 7 | "private": true, 8 | "scripts": { 9 | "dev": "cross-env NODE_ENV=development webpack-dev-server --open --inline --hot", 10 | "build": "cross-env NODE_ENV=production webpack --progress --hide-modules", 11 | "deploy": "node ./node_modules/vue-gh-pages/index.js" 12 | }, 13 | "dependencies": { 14 | "ethjs": "^0.4.0", 15 | "iexec-contracts-js-client": "^2.0.8", 16 | "iexec-poco-v2": "^1.0.18", 17 | "iexec-server-js-client": "^1.3.2", 18 | "ipfs-api": "^22.0.1", 19 | "readline": "^1.3.0", 20 | "vue": "^2.5.3", 21 | "vue-async-computed": "^3.3.1", 22 | "vue-picture-input": "^2.1.6", 23 | "vue-resource": "^1.5.1", 24 | "vuetify": "^1.0.0", 25 | "web3": "^1.0.0-beta.34" 26 | }, 27 | "devDependencies": { 28 | "babel-core": "^6.26.3", 29 | "babel-loader": "^7.1.4", 30 | "babel-plugin-transform-runtime": "^6.23.0", 31 | "babel-polyfill": "^6.26.0", 32 | "babel-preset-env": "^1.6.0", 33 | "babel-preset-es2015": "^6.24.1", 34 | "babel-preset-stage-0": "^6.24.1", 35 | "babel-preset-stage-2": "^6.24.1", 36 | "cross-env": "^5.0.5", 37 | "css-loader": "^0.28.7", 38 | "file-loader": "^1.1.4", 39 | "push-dir": "^0.4.1", 40 | "style-loader": "^0.13.1", 41 | "stylus": "^0.54.5", 42 | "stylus-loader": "^3.0.1", 43 | "uglifyjs-webpack-plugin": "^1.2.5", 44 | "vue-gh-pages": "^1.16.0", 45 | "vue-loader": "^13.0.5", 46 | "vue-template-compiler": "^2.5.3", 47 | "webpack": "^3.6.0", 48 | "webpack-dev-server": "^2.9.1" 49 | }, 50 | "wst": "/dist/" 51 | } 52 | -------------------------------------------------------------------------------- /frontend/public/favicon-32x32.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/frontend/public/favicon-32x32.png -------------------------------------------------------------------------------- /frontend/public/v.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/frontend/public/v.png -------------------------------------------------------------------------------- /frontend/src/App.vue: -------------------------------------------------------------------------------- 1 | 135 | 136 | 274 | -------------------------------------------------------------------------------- /frontend/src/chains.js: -------------------------------------------------------------------------------- 1 | 2 | export const DEFAULT_CHAIN = 1337; 3 | 4 | export const chains = { 5 | dev: { 6 | id: 1337, 7 | hub: '0x215a47b255f8636d8b44a807ae1dddbca7be10a7', 8 | server: 'https://mainxw.iex.ec:443', 9 | name: 'dev', 10 | }, 11 | mainnet: { 12 | server: 'https://mainxw.iex.ec:443', 13 | name: 'mainnet', 14 | id: 1, 15 | }, 16 | ropsten: { 17 | server: 'https://testxw.iex.ec:443', 18 | name: 'ropsten', 19 | id: 3, 20 | }, 21 | rinkeby: { 22 | server: 'https://testxw.iex.ec:443', 23 | name: 'rinkeby', 24 | id: 4, 25 | }, 26 | kovan: { 27 | server: 'https://testxw.iex.ec:443', 28 | name: 'kovan', 29 | id: 42, 30 | hub:
'0x12b92a17b1ca4bb10b861386446b8b2716e58c9b', 31 | api: '0x58cde9db0d95c8b6122d72a90c0b10a6e01adf6d', 32 | }, 33 | }; 34 | chains['1'] = chains.mainnet; 35 | chains['3'] = chains.ropsten; 36 | chains['4'] = chains.rinkeby; 37 | chains['42'] = chains.kovan; 38 | chains['1337'] = chains.dev; 39 | 40 | export const chainsMap = { 41 | mainnet: '1', 42 | 1: 'mainnet', 43 | ropsten: '3', 44 | 3: 'ropsten', 45 | rinkeby: '4', 46 | 4: 'rinkeby', 47 | kovan: '42', 48 | 42: 'kovan', 49 | dev: '1337', 50 | 1337: 'dev', 51 | }; 52 | 53 | 54 | export const chainNames = ['mainnet', 'ropsten', 'rinkeby', 'kovan', 'dev']; 55 | export const chainIDs = ['1', '3', '4', '42', '1337']; 56 | -------------------------------------------------------------------------------- /frontend/src/components/Category.vue: -------------------------------------------------------------------------------- 1 | 46 | 47 | 81 | -------------------------------------------------------------------------------- /frontend/src/components/Orders.vue: -------------------------------------------------------------------------------- 1 | 8 | 9 | 123 | -------------------------------------------------------------------------------- /frontend/src/components/Work.vue: -------------------------------------------------------------------------------- 1 | 18 | 19 | 76 | -------------------------------------------------------------------------------- /frontend/src/components/Works.vue: -------------------------------------------------------------------------------- 1 | 23 | 24 | 59 | -------------------------------------------------------------------------------- /frontend/src/main.js: -------------------------------------------------------------------------------- 1 | import Vue from 'vue' 2 | import App from './App.vue' 3 | import Vuetify from 'vuetify' 4 | import EthJs from 'ethjs' 5 | import createIExecContracts from 'iexec-contracts-js-client' 6 | import { chains, DEFAULT_CHAIN } from './chains' 7 | import AsyncComputed from 'vue-async-computed' 8 | import IpfsApi from 'ipfs-api' 9 | import VueResource from 'vue-resource' 10 | 11 | import 'vuetify/dist/vuetify.css' 12 | 13 | Vue.use(Vuetify) 14 | Vue.use(AsyncComputed) 15 | Vue.use(VueResource); 16 | 17 | const sleep = ms => new Promise(r => setTimeout(r, ms)); 18 | const debug = console.log; 19 | 20 | const ipfs = IpfsApi('nrxubuntu.eastus2.cloudapp.azure.com', '7001') 21 | // const ipfs = IpfsApi('localhost', '5001') 22 | // const ipfs = IpfsApi({ host: 'ipfs.infura.io', port: 5001, protocol: 'https' }) 23 | 24 | let Global = new Vue({ 25 | data: { 26 | $ipfs: ipfs, 27 | $ethjs: null, 28 | $account: null, 29 | $chainId: DEFAULT_CHAIN 30 | }, 31 | }); 32 | 33 | Vue.mixin({ 34 | computed: { 35 | $ipfs: { 36 | get: () => Global.$data.$ipfs 37 | }, 38 | $account: { 39 | get: () => { return Global.$data.$account }, 40 | set: (account) => { Global.$data.$account = account } 41 | }, 42 | $chainId: { 43 | get: () => { return Global.$data.$chainId }, 44 | set: (chainId) => { Global.$data.$chainId = chainId } 45 | }, 46 | $ethjs: { 47 | get: () => { return Global.$data.$ethjs }, 48 | set: (ethjs) => { Global.$data.$ethjs = ethjs } 49 | }, 50 | }, 51 | }) 52 | 53 | const setAccount = (account) => Global.$data.$account = account 54 | const setChainID = (chainId) => Global.$data.$chainId = chainId 55 | 56 | new Vue({ 57 | el: '#app', 58 | async mounted () { 59 | this.$ipfs.swarm.peers((err, res) => { 60 | if (err) { 61 | console.error("IPFS ERROR: " + err) 62 | } else { 63 | console.log("IPFS -
connected.") 64 | } 65 | }) 66 | 67 | this.$ipfs.id((err, res) => { 68 | if (err) throw err 69 | debug('IPFS id :', res.id) 70 | debug('IPFS agentVersion :', res.agentVersion) 71 | debug('IPFS protocolVersion :', res.protocolVersion) 72 | }) 73 | 74 | if (typeof window.web3 !== 'undefined') { 75 | this.$ethjs = new EthJs(window.web3.currentProvider); 76 | debug('ethjs', this.$ethjs); 77 | this.$ethjs 78 | .protocolVersion() 79 | .then(result => debug('ethjs.protocolVersion', result)) 80 | .catch(debug); 81 | this.$ethjs 82 | .net_version() 83 | .then(result => debug('ethjs.net_version', result)) 84 | .catch(debug); 85 | this.$ethjs 86 | .web3_clientVersion() 87 | .then(result => debug('ethjs.web3_clientVersion', result)) 88 | .catch(debug); 89 | this.$ethjs 90 | .accounts() 91 | .then(accounts => { 92 | debug('ethjs.accounts', accounts); 93 | }) 94 | .catch(debug); 95 | 96 | let currChainID = await this.$ethjs.net_version(); 97 | debug('currChainID', currChainID); 98 | if (currChainID && currChainID !== DEFAULT_CHAIN) setChainID(currChainID); 99 | const checkChainID = async () => { 100 | const newChainID = await this.$ethjs.net_version(); 101 | if (newChainID !== currChainID) { 102 | setChainID(newChainID); 103 | currChainID = newChainID; 104 | } 105 | await sleep(200); 106 | checkChainID(); 107 | }; 108 | checkChainID(); 109 | 110 | let [currAccount] = await this.$ethjs.accounts(); 111 | debug('currAccount', currAccount); 112 | if (currAccount) setAccount(currAccount); 113 | const checkAccount = async () => { 114 | const [newAccount] = await this.$ethjs.accounts(); 115 | if (newAccount !== currAccount) { 116 | debug('Metamask change account', newAccount); 117 | setAccount(newAccount); 118 | currAccount = newAccount; 119 | } 120 | await sleep(200); 121 | checkAccount(); 122 | }; 123 | checkAccount(); 124 | } else { 125 | debug('No web3, need to install metamask'); 126 | } 127 | }, 128 | render: h => h(App) 129 | }) 130 | -------------------------------------------------------------------------------- /frontend/src/utils/extensions.js: -------------------------------------------------------------------------------- 1 | // credit to : https://github.com/coldice/dbh-b9lab-hackathon/blob/development/truffle/utils/extensions.js 2 | var _ = require("lodash"); 3 | module.exports = { 4 | 5 | init: function(web3, assert) { 6 | // From https://gist.github.com/xavierlepretre/88682e871f4ad07be4534ae560692ee6 7 | web3.eth.getTransactionReceiptMined = function(txnHash, interval) { 8 | var transactionReceiptAsync; 9 | interval = interval ? 
interval : 500; 10 | transactionReceiptAsync = function(txnHash, resolve, reject) { 11 | try { 12 | web3.eth.getTransactionReceiptPromise(txnHash) 13 | .then(function(receipt) { 14 | if (receipt == null) { 15 | setTimeout(function() { 16 | transactionReceiptAsync(txnHash, resolve, reject); 17 | }, interval); 18 | } else { 19 | resolve(receipt); 20 | } 21 | }) 22 | .catch(function(e) { 23 | if ((e + "").indexOf("Error: unknown transaction") > -1) {//new error to catch with geth 1.8.1 24 | setTimeout(function() { 25 | transactionReceiptAsync(txnHash, resolve, reject); 26 | }, interval); 27 | } 28 | else { 29 | throw e; 30 | } 31 | }); 32 | } catch (e) { 33 | reject(e); 34 | } 35 | }; 36 | 37 | if (Array.isArray(txnHash)) { 38 | var promises = []; 39 | txnHash.forEach(function(oneTxHash) { 40 | promises.push(web3.eth.getTransactionReceiptMined(oneTxHash, interval)); 41 | }); 42 | return Promise.all(promises); 43 | } else { 44 | return new Promise(function(resolve, reject) { 45 | transactionReceiptAsync(txnHash, resolve, reject); 46 | }); 47 | } 48 | }; 49 | 50 | assert.isTxHash = function(txnHash, message) { 51 | assert(typeof txnHash === "string", 52 | 'expected #{txnHash} to be a string', 53 | 'expected #{txnHash} to not be a string'); 54 | assert(txnHash.length === 66, 55 | 'expected #{txnHash} to be a 66 character transaction hash (0x...)', 56 | 'expected #{txnHash} to not be a 66 character transaction hash (0x...)'); 57 | 58 | // Convert txnHash to a number. Make sure it's not zero. 59 | // Controversial: Technically there is that edge case where 60 | // all zeroes could be a valid address. But: This catches all 61 | // those cases where Ethereum returns 0x0000... if something fails. 62 | var number = web3.toBigNumber(txnHash, 16); 63 | assert(number.equals(0) === false, 64 | 'expected address #{txnHash} to not be zero', 65 | 'you shouldn\'t ever see this.'); 66 | }; 67 | }, 68 | assertEvent: function(contract, filter) { 69 | return new Promise((resolve, reject) => { 70 | var event = contract[filter.event](); 71 | event.watch(); 72 | event.get((error, logs) => { 73 | var log = _.filter(logs, filter); 74 | if (log && log.length > 0) { 75 | resolve(log); 76 | } else { 77 | throw Error("Failed to find filtered event for " + filter.event); 78 | } 79 | }); 80 | event.stopWatching(); 81 | }); 82 | }, 83 | // From https://gist.github.com/xavierlepretre/afab5a6ca65e0c52eaf902b50b807401 84 | getEventsPromise: function(myFilter, count, timeOut) { 85 | timeOut = timeOut ? timeOut : 100000; 86 | var promise = new Promise(function(resolve, reject) { 87 | count = (typeof count !== "undefined") ? 
count : 1; 88 | var results = []; 89 | var toClear = setTimeout(function() { 90 | myFilter.stopWatching(); 91 | reject(new Error("Timed out")); 92 | }, timeOut); 93 | myFilter.watch(function(error, result) { 94 | if (error) { 95 | clearTimeout(toClear); 96 | reject(error); 97 | } else { 98 | count--; 99 | results.push(result); 100 | } 101 | if (count <= 0) { 102 | clearTimeout(toClear); 103 | myFilter.stopWatching(); 104 | resolve(results); 105 | } 106 | }); 107 | }); 108 | if (count == 0) { 109 | promise = promise 110 | .then(function(events) { 111 | throw "Expected to have no event"; 112 | }) 113 | .catch(function(error) { 114 | if (error.message != "Timed out") { 115 | throw error; 116 | } 117 | }); 118 | } 119 | return promise; 120 | }, 121 | 122 | // From https://gist.github.com/xavierlepretre/d5583222fde52ddfbc58b7cfa0d2d0a9 123 | expectedExceptionPromise: function(action, gasToUse, timeOut) { 124 | timeOut = timeOut ? timeOut : 5000; 125 | var promise = new Promise(function(resolve, reject) { 126 | try { 127 | resolve(action()); 128 | } catch (e) { 129 | reject(e); 130 | } 131 | }) 132 | .then(function(txObject) { 133 | // We are in Geth 134 | console.log("Geth mode : run out of gas. throw expected ?"); 135 | assert.equal(txObject.receipt.gasUsed, gasToUse, "should have used all the gas"); 136 | }) 137 | .catch(function(e) { 138 | if ((e + "").indexOf("invalid JUMP") > -1 || (e + "").indexOf("out of gas") > -1) { 139 | // We are in TestRPC 140 | console.log("TestRPC mode : invalid JUMP or out of gas detected") 141 | } else if ((e + "").indexOf("please check your gas amount") > -1) { 142 | // We are in Geth for a deployment 143 | console.log("Geth mode : deployment in progress ...") 144 | } else if ((e + "").indexOf("Error: VM Exception while processing transaction: invalid opcode") > -1) { 145 | //see issue : https://github.com/trufflesuite/truffle/issues/498 146 | console.log("TestRPC mode : Error: VM Exception. throw expected ?"); 147 | } else if ((e + "").indexOf("Error: VM Exception while processing transaction: revert") > -1) { 148 | console.log("TestRPC mode : Error: VM Exception while processing transaction: revert"); 149 | } else { 150 | throw e; 151 | } 152 | }); 153 | 154 | return promise; 155 | }, 156 | waitPromise: function(timeOut, toPassOn) { 157 | timeOut = timeOut ? 
timeOut : 1000; 158 | return new Promise(function(resolve, reject) { 159 | setTimeout(function() { 160 | resolve(toPassOn); 161 | }, timeOut); 162 | }); 163 | }, 164 | signMessage: function(author, text) { 165 | /* 166 | * Sign a string and return (hash, v, r, s) used by ecrecover to regenerate the coinbase address; 167 | */ 168 | let sha = web3.sha3(text); 169 | return web3.eth.signPromise(author, sha) 170 | .then(signPromise => { 171 | 172 | sig = signPromise.substr(2, signPromise.length); 173 | let r = '0x' + sig.substr(0, 64); 174 | let s = '0x' + sig.substr(64, 64); 175 | let v = web3.toDecimal(sig.substr(128, 2)) + 27; 176 | return { 177 | sha, 178 | v, 179 | r, 180 | s 181 | }; 182 | }); 183 | }, 184 | makeSureAreUnlocked: function(accounts) { 185 | var requests = []; 186 | accounts.forEach(function(account, index) { 187 | requests.push(web3.eth.signPromise( 188 | account, 189 | "0x0000000000000000000000000000000000000000000000000000000000000000") 190 | .catch(function(error) { 191 | if (error.message == "account is locked") { 192 | throw Error("account " + account + " at index " + index + " is locked"); 193 | } else { 194 | throw error; 195 | } 196 | })); 197 | }); 198 | return Promise.all(requests); 199 | }, 200 | gasTxUsedCost: function(txMined) { 201 | var gasUsed = txMined.receipt.gasUsed; 202 | return web3.eth.getTransactionPromise(txMined.tx) 203 | .then(tx => (tx.gasPrice * gasUsed)); 204 | }, 205 | getCurrentBlockTime: function() { 206 | return web3.eth.getBlockNumberPromise() 207 | .then(blockNumber => web3.eth.getBlockPromise(blockNumber)) 208 | .then(block => block.timestamp); 209 | }, 210 | getCurrentBlockNumber: function() { 211 | return web3.eth.getBlockNumberPromise() 212 | .then(blockNumber => blockNumber); 213 | }, 214 | sleep: function(time) { 215 | console.log("wait " + time + " burning cpu") 216 | var d1 = new Date(); 217 | var d2 = new Date(); 218 | while (d2.valueOf() < d1.valueOf() + time) { 219 | d2 = new Date(); 220 | } 221 | }, 222 | iexecOperationId: function(user,provider,id) { 223 | let providerWithout0x = web3.toHex(provider).replace('0x', ''); 224 | let idWithout0x = web3.toHex(id).replace('0x', ''); 225 | return web3.sha3(user + providerWithout0x + idWithout0x, { 226 | encoding: 'hex' 227 | }); 228 | }, 229 | refillAccount: function(giver, receiver, minimalEtherNeeded) { 230 | return web3.eth.getBalancePromise(receiver) 231 | .then(balance => { 232 | if (balance.lessThan(web3.toWei(web3.toBigNumber(minimalEtherNeeded), "ether"))) { 233 | return web3.eth.sendTransactionPromise({ 234 | from: giver, 235 | to: receiver, 236 | value: web3.toWei(web3.toBigNumber(minimalEtherNeeded), "ether") 237 | }); 238 | } 239 | }) 240 | .then(txSent => { 241 | if (txSent) { 242 | return web3.eth.getTransactionReceiptMined(txSent); 243 | } 244 | }) 245 | .then(txMined => { 246 | if (txMined) { 247 | console.log("refill " + web3.toBigNumber(minimalEtherNeeded) + " ether to " + receiver); 248 | } 249 | }); 250 | }, 251 | isAddress: function(address) { 252 | if (!/^(0x)?[0-9a-f]{40}$/i.test(address)) { 253 | // check if it has the basic requirements of an address 254 | return false; 255 | } else if (/^(0x)?[0-9a-f]{40}$/.test(address) || /^(0x)?[0-9A-F]{40}$/.test(address)) { 256 | // If it's all small caps or all all caps, return true 257 | return true; 258 | } else { 259 | // Otherwise check each case 260 | address = address.replace('0x', ''); 261 | var addressHash = web3.sha3(address.toLowerCase()); 262 | 263 | for (var i = 0; i < 40; i++) { 264 | // the nth letter 
should be uppercase if the nth digit of casemap is 1 265 | if ((parseInt(addressHash[i], 16) > 8 && address[i].toUpperCase() != address[i]) || (parseInt(addressHash[i], 16) <= 8 && address[i].toLowerCase() != address[i])) { 266 | return false; 267 | } 268 | } 269 | return true; 270 | } 271 | }, 272 | hashByteResult: function(byteresult) 273 | { 274 | const resultHash = web3.sha3(byteresult, {encoding: 'hex'}); // Vote 275 | return resultHash; 276 | }, 277 | signByteResult: function(byteresult, address) 278 | { 279 | const resultHash = web3.sha3(byteresult, {encoding: 'hex'}); // Vote 280 | const addressHash = web3.sha3(address, {encoding: 'hex'}); 281 | var xor = '0x'; 282 | for(i=2; i<66; ++i) xor += (parseInt(byteresult.charAt(i), 16) ^ parseInt(addressHash.charAt(i), 16)).toString(16); // length 64, with starting 0x 283 | const sign = web3.sha3(xor, {encoding: 'hex'}); // Sign 284 | return {hash: resultHash, sign: sign}; 285 | }, 286 | hashResult: function(result) { return this.hashByteResult(web3.sha3(result) ); }, 287 | signResult: function(result, address) { return this.signByteResult(web3.sha3(result), address); }, 288 | 289 | }; 290 | -------------------------------------------------------------------------------- /frontend/webpack.config.js: -------------------------------------------------------------------------------- 1 | var path = require('path') 2 | var webpack = require('webpack') 3 | const UglifyJs = require('uglifyjs-webpack-plugin') 4 | 5 | module.exports = { 6 | entry: './src/main.js', 7 | output: { 8 | path: path.resolve(__dirname, './dist'), 9 | publicPath: '', 10 | filename: 'build.js' 11 | }, 12 | resolve: { 13 | extensions: ['.js', '.vue'], 14 | alias: { 15 | 'vue$': 'vue/dist/vue.esm.js', 16 | 'public': path.resolve(__dirname, './public') 17 | } 18 | }, 19 | module: { 20 | rules: [ 21 | { 22 | test: /\.vue$/, 23 | loader: 'vue-loader', 24 | options: { 25 | loaders: { 26 | } 27 | // other vue-loader options go here 28 | } 29 | }, 30 | { 31 | test: /\.js$/, 32 | loader: 'babel-loader', 33 | exclude: /node_modules/ 34 | }, 35 | { 36 | test: /\.(png|jpg|gif|svg)$/, 37 | loader: 'file-loader', 38 | options: { 39 | objectAssign: 'Object.assign' 40 | } 41 | }, 42 | { 43 | test: /\.css$/, 44 | loader: ['style-loader', 'css-loader'] 45 | }, 46 | { 47 | test: /\.styl$/, 48 | loader: ['style-loader', 'css-loader', 'stylus-loader'] 49 | } 50 | ] 51 | }, 52 | devServer: { 53 | historyApiFallback: true, 54 | disableHostCheck: true, 55 | noInfo: true 56 | }, 57 | performance: { 58 | hints: false 59 | }, 60 | devtool: '#eval-source-map' 61 | } 62 | 63 | if (process.env.NODE_ENV === 'production') { 64 | module.exports.devtool = '#source-map' 65 | // http://vue-loader.vuejs.org/en/workflow/production.html 66 | module.exports.plugins = (module.exports.plugins || []).concat([ 67 | new webpack.DefinePlugin({ 68 | 'process.env': { 69 | NODE_ENV: '"production"' 70 | } 71 | }), 72 | new UglifyJs(), 73 | new webpack.LoaderOptionsPlugin({ 74 | minimize: true 75 | }) 76 | ]) 77 | } 78 | -------------------------------------------------------------------------------- /iexec.json: -------------------------------------------------------------------------------- 1 | { 2 | "description": "Semantic segmentation with Keras+Tensorflow", 3 | "license": "GPL v2", 4 | "author": "Benoit Courty", 5 | "social": { 6 | "website": "https://github.com/trancept/decentralized_AI", 7 | "github": "https://github.com/trancept/decentralized_AI" 8 | }, 9 | "logo": "logo.png", 10 | "app": { 11 | "name": "Keras", 12 
| "price": 1, 13 | "params": { 14 | "type": "DOCKER", 15 | "envvars": "XWDOCKERIMAGE=trancept/keras_mrcnn:v0" 16 | } 17 | }, 18 | "order": { 19 | "buy": { 20 | "app": "0xc790d024ec41a7649e7a0590e4ae05891fa61ef8", 21 | "dataset": "0x0000000000000000000000000000000000000000", 22 | "params": { 23 | "cmdline": "http://www.met.grandlyon.com/wp-content/uploads/2016/11/Presquile-Rue-Victor-Hugo-6249.jpg" 24 | } 25 | } 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /img/20180604_143926.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/20180604_143926.png -------------------------------------------------------------------------------- /img/Alex-rd.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/Alex-rd.png -------------------------------------------------------------------------------- /img/Jeddi-rd168.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/Jeddi-rd168.png -------------------------------------------------------------------------------- /img/Mattew-rd168.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/Mattew-rd168.png -------------------------------------------------------------------------------- /img/Screenshot_from_2018-06-09_14-55-45.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/Screenshot_from_2018-06-09_14-55-45.png -------------------------------------------------------------------------------- /img/architecture_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/architecture_1.png -------------------------------------------------------------------------------- /img/architecture_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/architecture_2.png -------------------------------------------------------------------------------- /img/ben-rd168.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/ben-rd168.png -------------------------------------------------------------------------------- /img/front-work.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/front-work.jpg -------------------------------------------------------------------------------- /img/front_ai2.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/front_ai2.jpg -------------------------------------------------------------------------------- /img/front_preview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/front_preview.png -------------------------------------------------------------------------------- /img/iexec-team-MRCNN.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/iexec-team-MRCNN.png -------------------------------------------------------------------------------- /img/iexec-team.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/img/iexec-team.jpeg -------------------------------------------------------------------------------- /openmined/README.md: -------------------------------------------------------------------------------- 1 | # Distributed_AI 2 | 3 | 4 | Repository for the final project of Siraj's Distributed Apps course. 5 | 6 | ## Overview 7 | 8 | This notebook is an overview of the steps taken to implement OpenMined's Grid, the network OpenMined uses for distributed computation. 9 | 10 | 11 | It also includes OpenMined's demonstration of distributed tensor processing with torch, and shows how to run training epochs on the Boston Housing dataset. 12 | 13 | This is not the full project, but rather the most useful part of a project idea that was otherwise abandoned. 14 | 15 | 16 | ## TODO 17 | 18 | - Integration with the rest of team-rocket's DApp 19 | - Connection to a more reliable IPFS node (more reliable than a spare laptop) 20 | - Conversion to standalone Python code instead of a notebook -------------------------------------------------------------------------------- /openmined/Running_Grid.py: -------------------------------------------------------------------------------- 1 | 2 | # coding: utf-8 3 | 4 | # # Distributed model training with PyTorch's `torch.autograd` on Grid 5 | # 6 | # #### For Siraj's Distributed Apps Course 7 | # 8 | 9 | # In[1]: 10 | 11 | 12 | import torch # This line should NOT be run after instantiating TorchClient 13 | import torch.nn as nn 14 | from torch.autograd import Variable 15 | from grid.clients.torch import TorchClient 16 | import numpy as np 17 | import re 18 | 19 | # instantiate client 20 | client = TorchClient(verbose=False) 21 | 22 | 23 | # ## First Steps 24 | 25 | # The first step is to find out which nodes are connected to the Grid network. After this we can choose who to run the computations with.
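# (Hedged aside, not part of the original notebook.) The exact report printed
# by print_network_stats() varies between Grid versions, but iterating over
# the client itself yields the known worker ids as plain strings, so choosing
# nodes is ordinary string filtering. A minimal sketch, under the assumption
# that ids carry a role prefix such as 'compute:' or 'anchor:' --
# str.startswith behaves like the re.match('compute:', ...) filter used below:

all_workers = [w for w in client]  # every worker id Grid currently knows about
compute_only = [w for w in all_workers if w.startswith('compute:')]
anchor_only = [w for w in all_workers if w.startswith('anchor:')]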
26 | 27 | # In[2]: 28 | 29 | 30 | client.print_network_stats() 31 | 32 | 33 | # In[3]: 34 | 35 | 36 | compute_nodes = [x for x in client if re.match('compute:', x)] 37 | assert len(compute_nodes) > 0 38 | 39 | print(compute_nodes) 40 | 41 | 42 | # In[4]: 43 | 44 | 45 | compute_nodes = compute_nodes[::-1] 46 | 47 | 48 | # In[5]: 49 | 50 | 51 | compute_nodes 52 | 53 | 54 | # In[6]: 55 | 56 | 57 | laptop = compute_nodes[0] 58 | 59 | 60 | # ### Remote Tensor Ops 61 | 62 | # In[7]: 63 | 64 | 65 | x = torch.FloatTensor([[1,1],[2,2]]) 66 | x.send_(laptop) 67 | y = x * x 68 | y.get_() 69 | 70 | 71 | # ### Beyond gradient-based models 72 | # 73 | # Tensor computation is sufficient for training most gradient-based models, but it's not convenient for doing so when backpropagation is involved. To solve this, we'll want to use automatic differentiation using the Variable class from `torch.autograd`. 74 | 75 | # Here are some useful Grid-specific attributes that are useful for these purposes. Since we want to use autograd, we'll do so with Variables instead of tensors, but all the usual Tensor types have these attributes too. 76 | 77 | # In[8]: 78 | 79 | 80 | # Note: Variable is now a purely internal class in PyTorch v0.4.0, 81 | # but Grid currently depends on v0.3.1. 82 | # This will be updated as soon as possible. 83 | 84 | x = Variable(torch.FloatTensor([[1,1],[2,2]]), requires_grad=True) 85 | y = Variable(torch.FloatTensor([[1,1],[2,2]]), requires_grad=True) 86 | 87 | print(x) 88 | 89 | print('======\nGrid-specific attributes\n======') 90 | print('owners: {}'.format(x.owners)) 91 | print('id: {}'.format(x.id)) 92 | print('is_pointer: {}'.format(x.is_pointer)) 93 | 94 | 95 | # #### Grid attributes: 96 | # - The `owners` attribute tells us where the tensor's data lives. 97 | # - The `id` attribute is a way for each machine's instance of Grid to track which Torch objects they're holding locally, allowing different machines to request access to different objects. This also allows each worker to know which tensors they need to perform computations on. 98 | # - The `is_pointer` attribute tells us whether or not the object we're referring to is local or remote. If it's local (is_pointer is False), we'll execute normal PyTorch code on it. Otherwise, we'll send our command to the owner machine and have it perform the computation we want over there. 99 | 100 | # Now we can send our Variables like we did with the first tensor. 101 | 102 | # In[9]: 103 | 104 | 105 | x.send_(laptop) 106 | y.send_(laptop) 107 | 108 | print(x) 109 | 110 | print('======\nGrid-specific attributes\n======') 111 | print('owners: {}'.format(x.owners)) 112 | print('id: {}'.format(x.id)) 113 | print('is_pointer: {}'.format(x.is_pointer)) 114 | 115 | 116 | # The location is different (we're now using a compute node with a different worker ID), and the `is_pointer` attribute changed to True. The data is no longer on this machine; it's now on the worker machine. 117 | 118 | # We'll demonstrate some remote computation below: 119 | 120 | # In[10]: 121 | 122 | 123 | z = x.matmul(y) 124 | 125 | 126 | # We can check out the result's attributes like we did for `x` and `y` above. 127 | 128 | # In[11]: 129 | 130 | 131 | print(z) 132 | 133 | print('======\nGrid-specific attributes\n======') 134 | print('owners: {}'.format(z.owners)) 135 | print('id: {}'.format(z.id)) 136 | print('is_pointer: {}'.format(z.is_pointer)) 137 | 138 | 139 | # #### Why is this notable? 140 | 141 | # We didn't have to change anything in our PyTorch code. 
The command `matmul` is identical to the normal PyTorch command, but the computation is being performed elsewhere. 142 | # 143 | # This example was also NOT cherry-picked; theoretically any method or function that inputs and outputs Tensor/Variable objects can be used in this exact same way, as long as those objects are stored somewhere on Grid and we have a local pointer to those objects. 144 | # 145 | # Although the computation result is on the other machine, we still have access to a local pointer for the Variable. We can use that pointer in future computations, chaining together commands for remote machines without having to retrieve and send the underlying data between each command. For example: 146 | 147 | # In[12]: 148 | 149 | 150 | z_sum = z.sum() 151 | 152 | 153 | # #### Checking derivatives 154 | 155 | # We used Variables so that we can take advantage of autograd. Let's make sure that works, by taking the derivative of `z_sum` with respect to `x` and `y`: 156 | 157 | # In[13]: 158 | 159 | 160 | z_sum.backward() 161 | 162 | 163 | # The derivatives with respect to `x` and `y` are now stored in `x.grad` and `y.grad`, but we'll need to retrieve `x` and `y` to access those. That's okay for this use case, since we're not concerned about data privacy right now. Figuring out how to call `get_` on the grad itself would be another useful contribution! 164 | 165 | # In[14]: 166 | 167 | 168 | x.get_() 169 | y.get_() 170 | 171 | print(x.grad) 172 | print(y.grad) 173 | 174 | 175 | # We've now computed derivatives on a remote machine, and we did so interactively with a dynamic computation graph over a peer-to-peer network! 176 | 177 | # # Training a model with distributed gradient descent 178 | 179 | # Currently, IPFS has some limitations around how much data can be transferred in one block. They've introduced a sharding mechanism to get around this ([see here](https://github.com/ipfs/go-ipfs/pull/3042)), but it's not currently being used in Grid. 180 | # 181 | # We'll use Data with relatively low dimensionality -- the [Boston housing prices dataset](https://www.kaggle.com/c/boston-housing/data). 182 | 183 | # In[15]: 184 | 185 | 186 | from keras.datasets import boston_housing 187 | 188 | 189 | # In[16]: 190 | 191 | 192 | (X, y), (X_test, y_test) = boston_housing.load_data() 193 | 194 | 195 | # In[17]: 196 | 197 | 198 | X = torch.from_numpy(X).type(torch.FloatTensor) 199 | y = torch.from_numpy(y).type(torch.FloatTensor) 200 | X_test = torch.from_numpy(X_test).type(torch.FloatTensor) 201 | y_test = torch.from_numpy(y_test).type(torch.FloatTensor) 202 | 203 | 204 | # In[18]: 205 | 206 | 207 | # preprocessing 208 | mean = X.mean(0, keepdim=True) 209 | dev = X.std(0, keepdim=True) 210 | mean[:, 3] = 0. # the feature at column 3 is binary, 211 | dev[:, 3] = 1. 
# so I'd rather not standardize it 212 | X = (X - mean) / dev 213 | X_test = (X_test - mean) / dev 214 | 215 | 216 | # #### Hyperparameters 217 | 218 | # In[20]: 219 | 220 | 221 | # training 222 | batch_size = 8 223 | learning_rate = .01 224 | epochs = 5 225 | update_master_every = 3 226 | 227 | # architecture 228 | input_shape = X.shape[1] 229 | first_neurons = 64 230 | second_neurons = 32 231 | try: # will work for multivariate regression tasks too 232 | dep_vars = y.size(1) 233 | except RuntimeError: 234 | dep_vars = 1 235 | 236 | 237 | # #### PyTorch utilities for supplying data to models 238 | 239 | # In[21]: 240 | 241 | 242 | from torch.utils.data import TensorDataset, DataLoader 243 | 244 | 245 | # In[22]: 246 | 247 | 248 | train = TensorDataset(X, y) 249 | test = TensorDataset(X_test, y_test) 250 | 251 | 252 | # In[23]: 253 | 254 | 255 | tr_load = DataLoader(train, batch_size = 8, drop_last=True) 256 | ts_load = DataLoader(test, batch_size = 8, drop_last=True) 257 | 258 | 259 | # #### Allocating training batches to each worker 260 | # 261 | # Once we send batches out to participating workers, they won't need to move around for the rest of training -- they'll only be sharing the model. Our client machine will play the role of Parameter Server. This is known as "data parallelism" (as opposed to "model parallelism"). 262 | 263 | # In[24]: 264 | 265 | 266 | get_ipython().run_cell_magic('time', '', '\nallocated = []\n\nfor (ix, (x_i, y_i)) in enumerate(tr_load):\n x_i = Variable(x_i, requires_grad = True)\n y_i = Variable(y_i, requires_grad = True)\n x_i.send_(compute_nodes[ix % len(compute_nodes)])\n y_i.send_(compute_nodes[ix % len(compute_nodes)])\n allocated.append((x_i, y_i))') 267 | 268 | 269 | # In[25]: 270 | 271 | 272 | len(allocated) 273 | 274 | 275 | # #### Setting up the model parameters. 276 | 277 | # In[27]: 278 | 279 | 280 | W_0 = nn.Parameter(torch.FloatTensor(input_shape, first_neurons)) 281 | W_1 = nn.Parameter(torch.FloatTensor(first_neurons, second_neurons)) 282 | W_2 = nn.Parameter(torch.FloatTensor(second_neurons, dep_vars)) 283 | 284 | # initialize properly 285 | relu_gain = nn.init.calculate_gain('relu') 286 | lin_gain = nn.init.calculate_gain('linear') 287 | 288 | nn.init.xavier_normal(W_0, gain=relu_gain) 289 | nn.init.xavier_normal(W_1, gain=relu_gain) 290 | nn.init.xavier_normal(W_2, gain=lin_gain) 291 | 292 | print('Network parameters initialized.') 293 | 294 | 295 | # Architecture helpers 296 | 297 | # In[28]: 298 | 299 | 300 | def relu(x): 301 | """Rectified linear activation""" 302 | return torch.clamp(x, min=0.) 
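# (Hedged aside, not part of the original notebook.) The
# get_ipython().run_cell_magic('time', ...) call a few cells above is how
# nbconvert renders a '%%time' cell, and it only executes under IPython.
# Below is a plain-Python equivalent of that allocation step, wrapped in a
# function so the script can run standalone; it assumes the Variable import,
# tr_load and compute_nodes defined earlier in this file.

def allocate_batches(loader, nodes):
    """Round-robin each (x, y) batch onto a Grid worker; return pointer pairs."""
    out = []
    for ix, (x_i, y_i) in enumerate(loader):
        x_i = Variable(x_i, requires_grad=True)
        y_i = Variable(y_i, requires_grad=True)
        x_i.send_(nodes[ix % len(nodes)])  # the batch now lives on that worker
        y_i.send_(nodes[ix % len(nodes)])
        out.append((x_i, y_i))  # only local pointers are kept here
    return out

# e.g. allocated = allocate_batches(tr_load, compute_nodes) reproduces the
# '%%time' cell, minus the timing.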
303 | 304 | def linear(x, w): 305 | """Linear transformation of x by w""" 306 | return torch.matmul(x, w) 307 | 308 | def mse(y_hat, y_true): 309 | """Mean-squared error""" 310 | return torch.mean(torch.pow(y_hat - y_true, 2), dim=0, keepdim=True) 311 | 312 | 313 | # Gradient update helpers 314 | 315 | # In[29]: 316 | 317 | 318 | def average_grads(grads): 319 | """Elementwise average of a sequence of same-shaped gradients""" 320 | return torch.mean(torch.stack(grads), dim=0) # stack then average; cat+mean would collapse the grads to a single scalar 321 | 322 | def update_params(param, grad, alpha): 323 | """Update parameter tensor with standard mini-batch gradient descent""" 324 | return param - alpha * grad 325 | 326 | def reset_flags(param): 327 | """Resets flags for a Parameter that's experienced an in-place operation""" 328 | param.requires_grad = True 329 | param.volatile = False 330 | 331 | 332 | # #### Main training loop 333 | 334 | # In[30]: 335 | 336 | 337 | # Initialize gradient buffers 338 | W_0_grads = [] 339 | W_1_grads = [] 340 | W_2_grads = [] 341 | 342 | # Loop over epochs 343 | for epoch in range(epochs): 344 | 345 | # Loop over distributed batches 346 | for ix, (x_i, y_i) in enumerate(allocated): 347 | # Broadcast current weights to workers 348 | W_0_clones = [W_0.clone().send_(node) for node in compute_nodes] 349 | W_1_clones = [W_1.clone().send_(node) for node in compute_nodes] 350 | W_2_clones = [W_2.clone().send_(node) for node in compute_nodes] 351 | 352 | # Pull pointers from clone list 353 | W_0_tmp = W_0_clones[ix % len(compute_nodes)] 354 | W_1_tmp = W_1_clones[ix % len(compute_nodes)] 355 | W_2_tmp = W_2_clones[ix % len(compute_nodes)] 356 | 357 | # Forward pass 358 | act_0 = relu(linear(x_i, W_0_tmp)) 359 | act_1 = relu(linear(act_0, W_1_tmp)) 360 | y_hat = linear(act_1, W_2_tmp).view(-1) 361 | 362 | # Calculate MSE loss and perform backprop 363 | y_i = y_i.type_as(y_hat) # type-safety 364 | loss = mse(y_hat, y_i) 365 | loss.backward() 366 | 367 | # Recall parameters 368 | W_0_tmp.get_() 369 | W_1_tmp.get_() 370 | W_2_tmp.get_() 371 | 372 | # Store parameter grads 373 | W_0_grads.append(W_0_tmp.grad) 374 | W_1_grads.append(W_1_tmp.grad) 375 | W_2_grads.append(W_2_tmp.grad) 376 | 377 | # Update master parameters 378 | if ix % update_master_every == 0: 379 | W_0_grad = average_grads(W_0_grads) 380 | W_1_grad = average_grads(W_1_grads) 381 | W_2_grad = average_grads(W_2_grads) 382 | 383 | W_0 = update_params(W_0, W_0_grad, alpha=learning_rate) 384 | W_1 = update_params(W_1, W_1_grad, alpha=learning_rate) 385 | W_2 = update_params(W_2, W_2_grad, alpha=learning_rate) 386 | 387 | # We've overridden Variables in-place, which breaks the computation graph internally 388 | # This is intentional, but this needs to be cleaned up a bit to be able to keep going 389 | reset_flags(W_0) 390 | reset_flags(W_1) 391 | reset_flags(W_2) 392 | 393 | # Cleaning out parameter server grad buffers 394 | W_0_grads = [] 395 | W_1_grads = [] 396 | W_2_grads = [] 397 | 398 | 399 | print("Epoch {} done!".format(epoch + 1)) 400 | 401 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/.gitignore: -------------------------------------------------------------------------------- 1 | .idea/ 2 | logs/ 3 | *.pth 4 | *.so 5 | *.pyc 6 | *.o 7 | pycocotools 8 | *_ext* 9 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/LICENSE: -------------------------------------------------------------------------------- 1 | Mask R-CNN 2 | 3 | The MIT License (MIT) 4 | 5 | Copyright (c) 2017
Matterport, Inc. 6 | 7 | Permission is hereby granted, free of charge, to any person obtaining a copy 8 | of this software and associated documentation files (the "Software"), to deal 9 | in the Software without restriction, including without limitation the rights 10 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 | copies of the Software, and to permit persons to whom the Software is 12 | furnished to do so, subject to the following conditions: 13 | 14 | The above copyright notice and this permission notice shall be included in 15 | all copies or substantial portions of the Software. 16 | 17 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 20 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 21 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 22 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 23 | THE SOFTWARE. 24 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/README.md: -------------------------------------------------------------------------------- 1 | # pytorch-mask-rcnn 2 | 3 | 4 | This is a Pytorch implementation of [Mask R-CNN](https://arxiv.org/abs/1703.06870) that is in large part based on Matterport's 5 | [Mask_RCNN](https://github.com/matterport/Mask_RCNN). Matterport's repository is an implementation in Keras and TensorFlow. 6 | The following parts of the README are excerpts from the Matterport README. Details on the requirements, training on MS COCO 7 | and detection results for this repository can be found at the end of the document. 8 | 9 | The Mask R-CNN model generates bounding boxes and segmentation masks for each instance of an object in the image. It's based 10 | on Feature Pyramid Network (FPN) and a ResNet101 backbone. 11 | 12 | ![Instance Segmentation Sample](assets/street.png) 13 | 14 | The next four images visualize different stages in the detection pipeline: 15 | 16 | 17 | ##### 1. Anchor sorting and filtering 18 | The Region Proposal Network proposes bounding boxes that are likely to belong to an object. Positive and negative anchors 19 | along with anchor box refinement are visualized. 20 | 21 | ![](assets/detection_anchors.png) 22 | 23 | 24 | ##### 2. Bounding Box Refinement 25 | This is an example of final detection boxes (dotted lines) and the refinement applied to them (solid lines) in the second stage. 26 | 27 | ![](assets/detection_refinement.png) 28 | 29 | 30 | ##### 3. Mask Generation 31 | Examples of generated masks. These then get scaled and placed on the image in the right location. 32 | 33 | ![](assets/detection_masks.png) 34 | 35 | 36 | ##### 4. Composing the different pieces into a final result 37 | 38 | ![](assets/detection_final.png) 39 | 40 | ## Requirements 41 | * Python 3 42 | * Pytorch 0.3 43 | * matplotlib, scipy, skimage, h5py 44 | 45 | ## Installation 46 | 1. Clone this repository. 47 | 48 | git clone https://github.com/multimodallearning/pytorch-mask-rcnn.git 49 | 50 | 51 | 2. We use functions from two more repositories that need to be built with the right `--arch` option for CUDA support.
52 | The two functions are Non-Maximum Suppression from ruotianluo's [pytorch-faster-rcnn](https://github.com/ruotianluo/pytorch-faster-rcnn) 53 | repository and longcw's [RoiAlign](https://github.com/longcw/RoIAlign.pytorch). 54 | 55 | | GPU | arch | 56 | | --- | --- | 57 | | TitanX | sm_52 | 58 | | GTX 960M | sm_50 | 59 | | GTX 1070 | sm_61 | 60 | | GTX 1080 (Ti) | sm_61 | 61 | 62 | cd nms/src/cuda/ 63 | nvcc -c -o nms_kernel.cu.o nms_kernel.cu -x cu -Xcompiler -fPIC -arch=[arch] 64 | cd ../../ 65 | python build.py 66 | cd ../ 67 | 68 | cd roialign/roi_align/src/cuda/ 69 | nvcc -c -o crop_and_resize_kernel.cu.o crop_and_resize_kernel.cu -x cu -Xcompiler -fPIC -arch=[arch] 70 | cd ../../ 71 | python build.py 72 | cd ../../ 73 | 74 | 3. As we use the [COCO dataset](http://cocodataset.org/#home) install the [Python COCO API](https://github.com/cocodataset/cocoapi) and 75 | create a symlink. 76 | 77 | ln -s /path/to/coco/cocoapi/PythonAPI/pycocotools/ pycocotools 78 | 79 | 4. Download the pretrained models on COCO and ImageNet from [Google Drive](https://drive.google.com/open?id=1LXUgC2IZUYNEoXr05tdqyKFZY0pZyPDc). 80 | 81 | ## Demo 82 | 83 | To test your installation simply run the demo with 84 | 85 | python demo.py 86 | 87 | It works on CPU or GPU and the result should look like this: 88 | 89 | ![](assets/park.png) 90 | 91 | ## Training on COCO 92 | Training and evaluation code is in coco.py. You can run it from the command 93 | line as such: 94 | 95 | # Train a new model starting from pre-trained COCO weights 96 | python coco.py train --dataset=/path/to/coco/ --model=coco 97 | 98 | # Train a new model starting from ImageNet weights 99 | python coco.py train --dataset=/path/to/coco/ --model=imagenet 100 | 101 | # Continue training a model that you had trained earlier 102 | python coco.py train --dataset=/path/to/coco/ --model=/path/to/weights.h5 103 | 104 | # Continue training the last model you trained. This will find 105 | # the last trained weights in the model directory. 106 | python coco.py train --dataset=/path/to/coco/ --model=last 107 | 108 | If you have not yet downloaded the COCO dataset you should run the command 109 | with the download option set, e.g.: 110 | 111 | # Train a new model starting from pre-trained COCO weights 112 | python coco.py train --dataset=/path/to/coco/ --model=coco --download=true 113 | 114 | You can also run the COCO evaluation code with: 115 | 116 | # Run COCO evaluation on the last trained model 117 | python coco.py evaluate --dataset=/path/to/coco/ --model=last 118 | 119 | The training schedule, learning rate, and other parameters can be set in coco.py. 120 | 121 | ## Results 122 | 123 | COCO results for bounding box and segmentation are reported based on training 124 | with the default configuration and backbone initialized with pretrained 125 | ImageNet weights. Used metric is AP on IoU=0.50:0.95. 126 | 127 | | | from scratch | converted from keras | Matterport's Mask_RCNN | Mask R-CNN paper | 128 | | --- | --- | --- | --- | --- | 129 | | bbox | t.b.a. | 0.347 | 0.347 | 0.382 | 130 | | segm | t.b.a. 
| 0.296 | 0.296 | 0.354 | 131 | 132 | 133 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/assets/detection_anchors.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/assets/detection_anchors.png -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/assets/detection_final.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/assets/detection_final.png -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/assets/detection_masks.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/assets/detection_masks.png -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/assets/detection_refinement.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/assets/detection_refinement.png -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/assets/park.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/assets/park.png -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/assets/street.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/assets/street.png -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/config.py: -------------------------------------------------------------------------------- 1 | """ 2 | Mask R-CNN 3 | Base Configurations class. 4 | 5 | Copyright (c) 2017 Matterport, Inc. 6 | Licensed under the MIT License (see LICENSE for details) 7 | Written by Waleed Abdulla 8 | """ 9 | 10 | import math 11 | import numpy as np 12 | import os 13 | 14 | 15 | # Base Configuration Class 16 | # Don't use this class directly. Instead, sub-class it and override 17 | # the configurations you need to change. 18 | 19 | class Config(object): 20 | """Base configuration class. For custom configurations, create a 21 | sub-class that inherits from this one and override properties 22 | that need to be changed. 23 | """ 24 | # Name the configurations. For example, 'COCO', 'Experiment 3', ...etc. 25 | # Useful if your code needs to do things differently depending on which 26 | # experiment is running. 
27 | NAME = None # Override in sub-classes 28 | 29 | # Path to pretrained imagenet model 30 | IMAGENET_MODEL_PATH = os.path.join(os.getcwd(), "resnet50_imagenet.pth") 31 | 32 | # Number of GPUs to use. For CPU, use 0 33 | GPU_COUNT = 1 34 | 35 | # Number of images to train with on each GPU. A 12GB GPU can typically 36 | # handle 2 images of 1024x1024px. 37 | # Adjust based on your GPU memory and image sizes. Use the highest 38 | # number that your GPU can handle for best performance. 39 | IMAGES_PER_GPU = 1 40 | 41 | # Number of training steps per epoch 42 | # This doesn't need to match the size of the training set. Tensorboard 43 | # updates are saved at the end of each epoch, so setting this to a 44 | # smaller number means getting more frequent TensorBoard updates. 45 | # Validation stats are also calculated at each epoch end and they 46 | # might take a while, so don't set this too small to avoid spending 47 | # a lot of time on validation stats. 48 | STEPS_PER_EPOCH = 1000 49 | 50 | # Number of validation steps to run at the end of every training epoch. 51 | # A bigger number improves accuracy of validation stats, but slows 52 | # down the training. 53 | VALIDATION_STEPS = 50 54 | 55 | # The strides of each layer of the FPN Pyramid. These values 56 | # are based on a Resnet101 backbone. 57 | BACKBONE_STRIDES = [4, 8, 16, 32, 64] 58 | 59 | # Number of classification classes (including background) 60 | NUM_CLASSES = 1 # Override in sub-classes 61 | 62 | # Length of square anchor side in pixels 63 | RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512) 64 | 65 | # Ratios of anchors at each cell (width/height) 66 | # A value of 1 represents a square anchor, and 0.5 is a wide anchor 67 | RPN_ANCHOR_RATIOS = [0.5, 1, 2] 68 | 69 | # Anchor stride 70 | # If 1 then anchors are created for each cell in the backbone feature map. 71 | # If 2, then anchors are created for every other cell, and so on. 72 | RPN_ANCHOR_STRIDE = 1 73 | 74 | # Non-max suppression threshold to filter RPN proposals. 75 | # You can reduce this during training to generate more proposals. 76 | RPN_NMS_THRESHOLD = 0.7 77 | 78 | # How many anchors per image to use for RPN training 79 | RPN_TRAIN_ANCHORS_PER_IMAGE = 256 80 | 81 | # ROIs kept after non-maximum suppression (training and inference) 82 | POST_NMS_ROIS_TRAINING = 2000 83 | POST_NMS_ROIS_INFERENCE = 1000 84 | 85 | # If enabled, resizes instance masks to a smaller size to reduce 86 | # memory load. Recommended when using high-resolution images. 87 | USE_MINI_MASK = True 88 | MINI_MASK_SHAPE = (56, 56) # (height, width) of the mini-mask 89 | 90 | # Input image resizing 91 | # Images are resized such that the smallest side is >= IMAGE_MIN_DIM and 92 | # the longest side is <= IMAGE_MAX_DIM. In case both conditions can't 93 | # be satisfied together the IMAGE_MAX_DIM is enforced. 94 | IMAGE_MIN_DIM = 800 95 | IMAGE_MAX_DIM = 1024 96 | # If True, pad images with zeros such that they're (max_dim by max_dim) 97 | IMAGE_PADDING = True # currently, the False option is not supported 98 | 99 | # Image mean (RGB) 100 | MEAN_PIXEL = np.array([123.7, 116.8, 103.9]) 101 | 102 | # Number of ROIs per image to feed to classifier/mask heads 103 | # The Mask RCNN paper uses 512 but often the RPN doesn't generate 104 | # enough positive proposals to fill this and keep a positive:negative 105 | # ratio of 1:3. You can increase the number of proposals by adjusting 106 | # the RPN NMS threshold.
107 | TRAIN_ROIS_PER_IMAGE = 200 108 | 109 | # Percent of positive ROIs used to train classifier/mask heads 110 | ROI_POSITIVE_RATIO = 0.33 111 | 112 | # Pooled ROIs 113 | POOL_SIZE = 7 114 | MASK_POOL_SIZE = 14 115 | MASK_SHAPE = [28, 28] 116 | 117 | # Maximum number of ground truth instances to use in one image 118 | MAX_GT_INSTANCES = 100 119 | 120 | # Bounding box refinement standard deviation for RPN and final detections. 121 | RPN_BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2]) 122 | BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2]) 123 | 124 | # Max number of final detections 125 | DETECTION_MAX_INSTANCES = 100 126 | 127 | # Minimum probability value to accept a detected instance 128 | # ROIs below this threshold are skipped 129 | DETECTION_MIN_CONFIDENCE = 0.7 130 | 131 | # Non-maximum suppression threshold for detection 132 | DETECTION_NMS_THRESHOLD = 0.3 133 | 134 | # Learning rate and momentum 135 | # The Mask RCNN paper uses lr=0.02, but on TensorFlow it causes 136 | # weights to explode. Likely due to differences in optimizer 137 | # implementation. 138 | LEARNING_RATE = 0.001 139 | LEARNING_MOMENTUM = 0.9 140 | 141 | # Weight decay regularization 142 | WEIGHT_DECAY = 0.0001 143 | 144 | # Use RPN ROIs or externally generated ROIs for training 145 | # Keep this True for most situations. Set to False if you want to train 146 | # the head branches on ROIs generated by code rather than the ROIs from 147 | # the RPN. For example, to debug the classifier head without having to 148 | # train the RPN. 149 | USE_RPN_ROIS = True 150 | 151 | def __init__(self): 152 | """Set values of computed attributes.""" 153 | # Effective batch size 154 | if self.GPU_COUNT > 0: 155 | self.BATCH_SIZE = self.IMAGES_PER_GPU * self.GPU_COUNT 156 | else: 157 | self.BATCH_SIZE = self.IMAGES_PER_GPU 158 | 159 | # Adjust step size based on batch size 160 | self.STEPS_PER_EPOCH = self.BATCH_SIZE * self.STEPS_PER_EPOCH 161 | 162 | # Input image size 163 | self.IMAGE_SHAPE = np.array( 164 | [self.IMAGE_MAX_DIM, self.IMAGE_MAX_DIM, 3]) 165 | 166 | # Compute backbone size from input image size 167 | self.BACKBONE_SHAPES = np.array( 168 | [[int(math.ceil(self.IMAGE_SHAPE[0] / stride)), 169 | int(math.ceil(self.IMAGE_SHAPE[1] / stride))] 170 | for stride in self.BACKBONE_STRIDES]) 171 | 172 | def display(self): 173 | """Display Configuration values.""" 174 | print("\nConfigurations:") 175 | for a in dir(self): 176 | if not a.startswith("__") and not callable(getattr(self, a)): 177 | print("{:30} {}".format(a, getattr(self, a))) 178 | print("\n") 179 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/convert_from_keras.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import collections 3 | import h5py 4 | import torch 5 | 6 | alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'] 7 | 8 | parser = argparse.ArgumentParser(description='Convert keras-mask-rcnn model to pytorch-mask-rcnn model') 9 | parser.add_argument('--keras_model', 10 | help='the path of the keras model', 11 | default=None, type=str) 12 | parser.add_argument('--pytorch_model', 13 | help='the path of the pytorch model', 14 | default=None, type=str) 15 | 16 | args = parser.parse_args() 17 | 18 | f = h5py.File(args.keras_model, mode='r') 19 | state_dict = collections.OrderedDict() 20 | for group_name, group in f.items(): 21 | if
len(group.items())!=0: 22 | for layer_name, layer in group.items(): 23 | for weight_name, weight in layer.items(): 24 | state_dict[layer_name+'.'+weight_name] = weight.value 25 | 26 | replace_dict = collections.OrderedDict([ 27 | ('beta:0', 'bias'), \ 28 | ('gamma:0', 'weight'), \ 29 | ('moving_mean:0', 'running_mean'),\ 30 | ('moving_variance:0', 'running_var'),\ 31 | ('bias:0', 'bias'), \ 32 | ('kernel:0', 'weight'), \ 33 | ('mrcnn_mask_', 'mask.'), \ 34 | ('mrcnn_mask', 'mask.conv5'), \ 35 | ('mrcnn_class_', 'classifier.'), \ 36 | ('logits', 'linear_class'), \ 37 | ('mrcnn_bbox_fc', 'classifier.linear_bbox'), \ 38 | ('rpn_', 'rpn.'), \ 39 | ('class_raw', 'conv_class'), \ 40 | ('bbox_pred', 'conv_bbox'), \ 41 | ('bn_conv1', 'fpn.C1.1'), \ 42 | ('bn2a_branch1', 'fpn.C2.0.downsample.1'), \ 43 | ('res2a_branch1', 'fpn.C2.0.downsample.0'), \ 44 | ('bn3a_branch1', 'fpn.C3.0.downsample.1'), \ 45 | ('res3a_branch1', 'fpn.C3.0.downsample.0'), \ 46 | ('bn4a_branch1', 'fpn.C4.0.downsample.1'), \ 47 | ('res4a_branch1', 'fpn.C4.0.downsample.0'), \ 48 | ('bn5a_branch1', 'fpn.C5.0.downsample.1'), \ 49 | ('res5a_branch1', 'fpn.C5.0.downsample.0'), \ 50 | ('fpn_c2p2', 'fpn.P2_conv1'), \ 51 | ('fpn_c3p3', 'fpn.P3_conv1'), \ 52 | ('fpn_c4p4', 'fpn.P4_conv1'), \ 53 | ('fpn_c5p5', 'fpn.P5_conv1'), \ 54 | ('fpn_p2', 'fpn.P2_conv2.1'), \ 55 | ('fpn_p3', 'fpn.P3_conv2.1'), \ 56 | ('fpn_p4', 'fpn.P4_conv2.1'), \ 57 | ('fpn_p5', 'fpn.P5_conv2.1'), \ 58 | ]) 59 | 60 | replace_exact_dict = collections.OrderedDict([ 61 | ('conv1.bias', 'fpn.C1.0.bias'), \ 62 | ('conv1.weight', 'fpn.C1.0.weight'), \ 63 | ]) 64 | 65 | for block in range(3): 66 | for branch in range(3): 67 | replace_dict['bn2' + alphabet[block] + '_branch2' + alphabet[branch]] = 'fpn.C2.' + str(block) + '.bn' + str( 68 | branch+1) 69 | replace_dict['res2'+alphabet[block]+'_branch2'+alphabet[branch]] = 'fpn.C2.'+str(block)+'.conv'+str(branch+1) 70 | 71 | for block in range(4): 72 | for branch in range(3): 73 | replace_dict['bn3' + alphabet[block] + '_branch2' + alphabet[branch]] = 'fpn.C3.' + str(block) + '.bn' + str( 74 | branch+1) 75 | replace_dict['res3'+alphabet[block]+'_branch2'+alphabet[branch]] = 'fpn.C3.'+str(block)+'.conv'+str(branch+1) 76 | 77 | for block in range(23): 78 | for branch in range(3): 79 | replace_dict['bn4' + alphabet[block] + '_branch2' + alphabet[branch]] = 'fpn.C4.' + str(block) + '.bn' + str( 80 | branch+1) 81 | replace_dict['res4'+alphabet[block]+'_branch2'+alphabet[branch]] = 'fpn.C4.'+str(block)+'.conv'+str(branch+1) 82 | 83 | for block in range(3): 84 | for branch in range(3): 85 | replace_dict['bn5' + alphabet[block] + '_branch2' + alphabet[branch]] = 'fpn.C5.' + str(block) + '.bn' + str(branch+1) 86 | replace_dict['res5'+ alphabet[block] + '_branch2' + alphabet[branch]] = 'fpn.C5.' 
+ str(block) + '.conv' + str(branch+1) 87 | 88 | 89 | for orig, repl in replace_dict.items(): 90 | for key in list(state_dict.keys()): 91 | if orig in key: 92 | state_dict[key.replace(orig, repl)] = state_dict[key] 93 | del state_dict[key] 94 | 95 | for orig, repl in replace_exact_dict.items(): 96 | for key in list(state_dict.keys()): 97 | if orig == key: 98 | state_dict[repl] = state_dict[key] 99 | del state_dict[key] 100 | 101 | for weight_name in list(state_dict.keys()): 102 | if state_dict[weight_name].ndim == 4: 103 | state_dict[weight_name] = state_dict[weight_name].transpose((3, 2, 0, 1)).copy(order='C') 104 | if state_dict[weight_name].ndim == 2: 105 | state_dict[weight_name] = state_dict[weight_name].transpose((1, 0)).copy(order='C') 106 | 107 | for weight_name in list(state_dict.keys()): 108 | state_dict[weight_name] = torch.from_numpy(state_dict[weight_name]) 109 | 110 | torch.save(state_dict, args.pytorch_model) -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/demo.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import random 4 | import math 5 | import numpy as np 6 | import skimage.io 7 | import matplotlib 8 | import matplotlib.pyplot as plt 9 | 10 | import coco 11 | import utils 12 | import model as modellib 13 | import visualize 14 | 15 | import torch 16 | 17 | 18 | # Root directory of the project 19 | ROOT_DIR = os.getcwd() 20 | 21 | # Directory to save logs and trained model 22 | MODEL_DIR = os.path.join(ROOT_DIR, "logs") 23 | 24 | # Path to trained weights file 25 | # Download this file and place in the root of your 26 | # project (See README file for details) 27 | COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.pth") 28 | 29 | # Directory of images to run detection on 30 | IMAGE_DIR = os.path.join(ROOT_DIR, "images") 31 | 32 | class InferenceConfig(coco.CocoConfig): 33 | # Set batch size to 1 since we'll be running inference on 34 | # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU 35 | # GPU_COUNT = 0 for CPU 36 | GPU_COUNT = 1 37 | IMAGES_PER_GPU = 1 38 | 39 | config = InferenceConfig() 40 | config.display() 41 | 42 | # Create model object. 43 | model = modellib.MaskRCNN(model_dir=MODEL_DIR, config=config) 44 | if config.GPU_COUNT: 45 | model = model.cuda() 46 | 47 | # Load weights trained on MS-COCO 48 | model.load_state_dict(torch.load(COCO_MODEL_PATH)) 49 | 50 | # COCO Class names 51 | # Index of the class in the list is its ID. 
For example, to get ID of 52 | # the teddy bear class, use: class_names.index('teddy bear') 53 | class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 54 | 'bus', 'train', 'truck', 'boat', 'traffic light', 55 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 56 | 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 57 | 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 58 | 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 59 | 'kite', 'baseball bat', 'baseball glove', 'skateboard', 60 | 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 61 | 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 62 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 63 | 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 64 | 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 65 | 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 66 | 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 67 | 'teddy bear', 'hair drier', 'toothbrush'] 68 | 69 | # Load a random image from the images folder 70 | file_names = next(os.walk(IMAGE_DIR))[2] 71 | image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names))) 72 | 73 | # Run detection 74 | results = model.detect([image]) 75 | 76 | # Visualize results 77 | r = results[0] 78 | visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], 79 | class_names, r['scores']) 80 | plt.show() -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/1045023827_4ec3e8ba5c_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/1045023827_4ec3e8ba5c_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/12283150_12d37e6389_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/12283150_12d37e6389_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/2383514521_1fc8d7b0de_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/2383514521_1fc8d7b0de_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/2502287818_41e4b0c4fb_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/2502287818_41e4b0c4fb_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/2516944023_d00345997d_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/2516944023_d00345997d_z.jpg 
-------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/25691390_f9944f61b5_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/25691390_f9944f61b5_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/262985539_1709e54576_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/262985539_1709e54576_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/3132016470_c27baa00e8_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/3132016470_c27baa00e8_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/3627527276_6fe8cd9bfe_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/3627527276_6fe8cd9bfe_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/3651581213_f81963d1dd_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/3651581213_f81963d1dd_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/3800883468_12af3c0b50_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/3800883468_12af3c0b50_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/3862500489_6fd195d183_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/3862500489_6fd195d183_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/3878153025_8fde829928_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/3878153025_8fde829928_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/4410436637_7b0ca36ee7_z.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/4410436637_7b0ca36ee7_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/4782628554_668bc31826_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/4782628554_668bc31826_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/5951960966_d4e1cda5d0_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/5951960966_d4e1cda5d0_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/6584515005_fce9cec486_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/6584515005_fce9cec486_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/6821351586_59aa0dc110_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/6821351586_59aa0dc110_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/7581246086_cf7bbb7255_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/7581246086_cf7bbb7255_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/7933423348_c30bd9bd4e_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/7933423348_c30bd9bd4e_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/8053677163_d4c8f416be_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8053677163_d4c8f416be_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/8239308689_efa6c11b08_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8239308689_efa6c11b08_z.jpg -------------------------------------------------------------------------------- 
/openmined/pytorch-mask-rcnn-master/images/8433365521_9252889f9a_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8433365521_9252889f9a_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/8512296263_5fc5458e20_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8512296263_5fc5458e20_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/8699757338_c3941051b6_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8699757338_c3941051b6_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/8734543718_37f6b8bd45_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8734543718_37f6b8bd45_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/8829708882_48f263491e_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/8829708882_48f263491e_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/9118579087_f9ffa19e63_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/9118579087_f9ffa19e63_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/images/9247489789_132c0d534a_z.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/images/9247489789_132c0d534a_z.jpg -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/nms/__init__.py -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/build.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | from torch.utils.ffi import create_extension 4 | 5 | 6 | sources = ['src/nms.c'] 7 | headers = ['src/nms.h'] 8 | defines = [] 9 | with_cuda = False 10 | 11 | if 
torch.cuda.is_available(): 12 | print('Including CUDA code.') 13 | sources += ['src/nms_cuda.c'] 14 | headers += ['src/nms_cuda.h'] 15 | defines += [('WITH_CUDA', None)] 16 | with_cuda = True 17 | 18 | this_file = os.path.dirname(os.path.realpath(__file__)) 19 | print(this_file) 20 | extra_objects = ['src/cuda/nms_kernel.cu.o'] if with_cuda else []  # only link the compiled CUDA kernel when building with CUDA 21 | extra_objects = [os.path.join(this_file, fname) for fname in extra_objects] 22 | 23 | ffi = create_extension( 24 | '_ext.nms', 25 | headers=headers, 26 | sources=sources, 27 | define_macros=defines, 28 | relative_to=__file__, 29 | with_cuda=with_cuda, 30 | extra_objects=extra_objects 31 | ) 32 | 33 | if __name__ == '__main__': 34 | ffi.build() 35 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/nms_wrapper.py: -------------------------------------------------------------------------------- 1 | # -------------------------------------------------------- 2 | # Fast R-CNN 3 | # Copyright (c) 2015 Microsoft 4 | # Licensed under The MIT License [see LICENSE for details] 5 | # Written by Ross Girshick 6 | # -------------------------------------------------------- 7 | from __future__ import absolute_import 8 | from __future__ import division 9 | from __future__ import print_function 10 | 11 | from nms.pth_nms import pth_nms 12 | 13 | 14 | def nms(dets, thresh): 15 | """Dispatch to either CPU or GPU NMS implementations. 16 | Accept dets as tensor""" 17 | return pth_nms(dets, thresh) 18 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/pth_nms.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from ._ext import nms 3 | import numpy as np 4 | 5 | def pth_nms(dets, thresh): 6 | """ 7 | dets has to be a tensor 8 | """ 9 | if not dets.is_cuda: 10 | x1 = dets[:, 1] 11 | y1 = dets[:, 0] 12 | x2 = dets[:, 3] 13 | y2 = dets[:, 2] 14 | scores = dets[:, 4] 15 | 16 | areas = (x2 - x1 + 1) * (y2 - y1 + 1) 17 | order = scores.sort(0, descending=True)[1] 18 | # order = torch.from_numpy(np.ascontiguousarray(scores.numpy().argsort()[::-1])).long() 19 | 20 | keep = torch.LongTensor(dets.size(0)) 21 | num_out = torch.LongTensor(1) 22 | nms.cpu_nms(keep, num_out, dets, order, areas, thresh) 23 | 24 | return keep[:num_out[0]] 25 | else: 26 | x1 = dets[:, 1] 27 | y1 = dets[:, 0] 28 | x2 = dets[:, 3] 29 | y2 = dets[:, 2] 30 | scores = dets[:, 4] 31 | 32 | dets_temp = torch.FloatTensor(dets.size()).cuda() 33 | dets_temp[:, 0] = dets[:, 1] 34 | dets_temp[:, 1] = dets[:, 0] 35 | dets_temp[:, 2] = dets[:, 3] 36 | dets_temp[:, 3] = dets[:, 2] 37 | dets_temp[:, 4] = dets[:, 4] 38 | 39 | areas = (x2 - x1 + 1) * (y2 - y1 + 1) 40 | order = scores.sort(0, descending=True)[1] 41 | # order = torch.from_numpy(np.ascontiguousarray(scores.cpu().numpy().argsort()[::-1])).long().cuda() 42 | 43 | dets_temp = dets_temp[order].contiguous()  # gpu_nms expects the boxes sorted by descending score 44 | 45 | keep = torch.LongTensor(dets.size(0)) 46 | num_out = torch.LongTensor(1) 47 | # keep = torch.cuda.LongTensor(dets.size(0)) 48 | # num_out = torch.cuda.LongTensor(1) 49 | nms.gpu_nms(keep, num_out, dets_temp, thresh) 50 | 51 | return order[keep[:num_out[0]].cuda()].contiguous() 52 | # return order[keep[:num_out[0]]].contiguous() 53 | 54 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/src/cuda/nms_kernel.cu: -------------------------------------------------------------------------------- 1 | //
------------------------------------------------------------------ 2 | // Faster R-CNN 3 | // Copyright (c) 2015 Microsoft 4 | // Licensed under The MIT License [see fast-rcnn/LICENSE for details] 5 | // Written by Shaoqing Ren 6 | // ------------------------------------------------------------------ 7 | #ifdef __cplusplus 8 | extern "C" { 9 | #endif 10 | 11 | #include <stdio.h> 12 | #include <math.h> 13 | #include <float.h> 14 | #include "nms_kernel.h" 15 | 16 | __device__ inline float devIoU(float const * const a, float const * const b) { 17 | float left = fmaxf(a[0], b[0]), right = fminf(a[2], b[2]); 18 | float top = fmaxf(a[1], b[1]), bottom = fminf(a[3], b[3]); 19 | float width = fmaxf(right - left + 1, 0.f), height = fmaxf(bottom - top + 1, 0.f); 20 | float interS = width * height; 21 | float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1); 22 | float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1); 23 | return interS / (Sa + Sb - interS); 24 | } 25 | 26 | __global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh, 27 | const float *dev_boxes, unsigned long long *dev_mask) { 28 | const int row_start = blockIdx.y; 29 | const int col_start = blockIdx.x; 30 | 31 | // if (row_start > col_start) return; 32 | 33 | const int row_size = 34 | fminf(n_boxes - row_start * threadsPerBlock, threadsPerBlock); 35 | const int col_size = 36 | fminf(n_boxes - col_start * threadsPerBlock, threadsPerBlock); 37 | 38 | __shared__ float block_boxes[threadsPerBlock * 5]; 39 | if (threadIdx.x < col_size) { 40 | block_boxes[threadIdx.x * 5 + 0] = 41 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0]; 42 | block_boxes[threadIdx.x * 5 + 1] = 43 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1]; 44 | block_boxes[threadIdx.x * 5 + 2] = 45 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2]; 46 | block_boxes[threadIdx.x * 5 + 3] = 47 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3]; 48 | block_boxes[threadIdx.x * 5 + 4] = 49 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4]; 50 | } 51 | __syncthreads(); 52 | 53 | if (threadIdx.x < row_size) { 54 | const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x; 55 | const float *cur_box = dev_boxes + cur_box_idx * 5; 56 | int i = 0; 57 | unsigned long long t = 0; 58 | int start = 0; 59 | if (row_start == col_start) { 60 | start = threadIdx.x + 1; 61 | } 62 | for (i = start; i < col_size; i++) { 63 | if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) { 64 | t |= 1ULL << i; 65 | } 66 | } 67 | const int col_blocks = DIVUP(n_boxes, threadsPerBlock); 68 | dev_mask[cur_box_idx * col_blocks + col_start] = t; 69 | } 70 | } 71 | 72 | 73 | void _nms(int boxes_num, float * boxes_dev, 74 | unsigned long long * mask_dev, float nms_overlap_thresh) { 75 | 76 | dim3 blocks(DIVUP(boxes_num, threadsPerBlock), 77 | DIVUP(boxes_num, threadsPerBlock)); 78 | dim3 threads(threadsPerBlock); 79 | nms_kernel<<<blocks, threads>>>(boxes_num, 80 | nms_overlap_thresh, 81 | boxes_dev, 82 | mask_dev); 83 | } 84 | 85 | #ifdef __cplusplus 86 | } 87 | #endif 88 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/src/cuda/nms_kernel.h: -------------------------------------------------------------------------------- 1 | #ifndef _NMS_KERNEL 2 | #define _NMS_KERNEL 3 | 4 | #ifdef __cplusplus 5 | extern "C" { 6 | #endif 7 | 8 | #define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0)) 9 | int const threadsPerBlock = sizeof(unsigned long long) * 8; 10 | 11 | void _nms(int boxes_num, float *
boxes_dev, 12 | unsigned long long * mask_dev, float nms_overlap_thresh); 13 | 14 | #ifdef __cplusplus 15 | } 16 | #endif 17 | 18 | #endif 19 | 20 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/src/nms.c: -------------------------------------------------------------------------------- 1 | #include <TH/TH.h> 2 | #include <math.h> 3 | 4 | int cpu_nms(THLongTensor * keep_out, THLongTensor * num_out, THFloatTensor * boxes, THLongTensor * order, THFloatTensor * areas, float nms_overlap_thresh) { 5 | // boxes has to be sorted 6 | THArgCheck(THLongTensor_isContiguous(keep_out), 0, "keep_out must be contiguous"); 7 | THArgCheck(THFloatTensor_isContiguous(boxes), 2, "boxes must be contiguous"); 8 | THArgCheck(THLongTensor_isContiguous(order), 3, "order must be contiguous"); 9 | THArgCheck(THFloatTensor_isContiguous(areas), 4, "areas must be contiguous"); 10 | // Number of ROIs 11 | long boxes_num = THFloatTensor_size(boxes, 0); 12 | long boxes_dim = THFloatTensor_size(boxes, 1); 13 | 14 | long * keep_out_flat = THLongTensor_data(keep_out); 15 | float * boxes_flat = THFloatTensor_data(boxes); 16 | long * order_flat = THLongTensor_data(order); 17 | float * areas_flat = THFloatTensor_data(areas); 18 | 19 | THByteTensor* suppressed = THByteTensor_newWithSize1d(boxes_num); 20 | THByteTensor_fill(suppressed, 0); 21 | unsigned char * suppressed_flat = THByteTensor_data(suppressed); 22 | 23 | // nominal indices 24 | int i, j; 25 | // sorted indices 26 | int _i, _j; 27 | // temp variables for box i's (the box currently under consideration) 28 | float ix1, iy1, ix2, iy2, iarea; 29 | // variables for computing overlap with box j (lower scoring box) 30 | float xx1, yy1, xx2, yy2; 31 | float w, h; 32 | float inter, ovr; 33 | 34 | long num_to_keep = 0; 35 | for (_i=0; _i < boxes_num; ++_i) { 36 | i = order_flat[_i]; 37 | if (suppressed_flat[i] == 1) { 38 | continue; 39 | } 40 | keep_out_flat[num_to_keep++] = i; 41 | ix1 = boxes_flat[i * boxes_dim]; 42 | iy1 = boxes_flat[i * boxes_dim + 1]; 43 | ix2 = boxes_flat[i * boxes_dim + 2]; 44 | iy2 = boxes_flat[i * boxes_dim + 3]; 45 | iarea = areas_flat[i]; 46 | for (_j = _i + 1; _j < boxes_num; ++_j) { 47 | j = order_flat[_j]; 48 | if (suppressed_flat[j] == 1) { 49 | continue; 50 | } 51 | xx1 = fmaxf(ix1, boxes_flat[j * boxes_dim]); 52 | yy1 = fmaxf(iy1, boxes_flat[j * boxes_dim + 1]); 53 | xx2 = fminf(ix2, boxes_flat[j * boxes_dim + 2]); 54 | yy2 = fminf(iy2, boxes_flat[j * boxes_dim + 3]); 55 | w = fmaxf(0.0, xx2 - xx1 + 1); 56 | h = fmaxf(0.0, yy2 - yy1 + 1); 57 | inter = w * h; 58 | ovr = inter / (iarea + areas_flat[j] - inter); 59 | if (ovr >= nms_overlap_thresh) { 60 | suppressed_flat[j] = 1; 61 | } 62 | } 63 | } 64 | 65 | long *num_out_flat = THLongTensor_data(num_out); 66 | *num_out_flat = num_to_keep; 67 | THByteTensor_free(suppressed); 68 | return 1; 69 | } -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/src/nms.h: -------------------------------------------------------------------------------- 1 | int cpu_nms(THLongTensor * keep_out, THLongTensor * num_out, THFloatTensor * boxes, THLongTensor * order, THFloatTensor * areas, float nms_overlap_thresh); -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/src/nms_cuda.c: -------------------------------------------------------------------------------- 1 | //
------------------------------------------------------------------ 2 | // Faster R-CNN 3 | // Copyright (c) 2015 Microsoft 4 | // Licensed under The MIT License [see fast-rcnn/LICENSE for details] 5 | // Written by Shaoqing Ren 6 | // ------------------------------------------------------------------ 7 | #include <TH/TH.h> 8 | #include <THC/THC.h> 9 | #include <stdio.h> 10 | #include <math.h> 11 | 12 | #include "cuda/nms_kernel.h" 13 | 14 | 15 | extern THCState *state; 16 | 17 | int gpu_nms(THLongTensor * keep, THLongTensor* num_out, THCudaTensor * boxes, float nms_overlap_thresh) { 18 | // boxes has to be sorted 19 | THArgCheck(THLongTensor_isContiguous(keep), 0, "keep must be contiguous"); 20 | THArgCheck(THCudaTensor_isContiguous(state, boxes), 2, "boxes must be contiguous"); 21 | // Number of ROIs 22 | int boxes_num = THCudaTensor_size(state, boxes, 0); 23 | int boxes_dim = THCudaTensor_size(state, boxes, 1); 24 | 25 | float* boxes_flat = THCudaTensor_data(state, boxes); 26 | 27 | const int col_blocks = DIVUP(boxes_num, threadsPerBlock); 28 | THCudaLongTensor * mask = THCudaLongTensor_newWithSize2d(state, boxes_num, col_blocks); 29 | unsigned long long* mask_flat = (unsigned long long*) THCudaLongTensor_data(state, mask); 30 | 31 | _nms(boxes_num, boxes_flat, mask_flat, nms_overlap_thresh); 32 | 33 | THLongTensor * mask_cpu = THLongTensor_newWithSize2d(boxes_num, col_blocks); 34 | THLongTensor_copyCuda(state, mask_cpu, mask); 35 | THCudaLongTensor_free(state, mask); 36 | 37 | unsigned long long * mask_cpu_flat = (unsigned long long *) THLongTensor_data(mask_cpu); 38 | 39 | THLongTensor * remv_cpu = THLongTensor_newWithSize1d(col_blocks); 40 | unsigned long long* remv_cpu_flat = (unsigned long long*) THLongTensor_data(remv_cpu); 41 | THLongTensor_fill(remv_cpu, 0); 42 | 43 | long * keep_flat = THLongTensor_data(keep); 44 | long num_to_keep = 0; 45 | 46 | int i, j; 47 | for (i = 0; i < boxes_num; i++) { 48 | int nblock = i / threadsPerBlock; 49 | int inblock = i % threadsPerBlock; 50 | 51 | if (!(remv_cpu_flat[nblock] & (1ULL << inblock))) { 52 | keep_flat[num_to_keep++] = i; 53 | unsigned long long *p = &mask_cpu_flat[0] + i * col_blocks; 54 | for (j = nblock; j < col_blocks; j++) { 55 | remv_cpu_flat[j] |= p[j]; 56 | } 57 | } 58 | } 59 | 60 | long * num_out_flat = THLongTensor_data(num_out); 61 | * num_out_flat = num_to_keep; 62 | 63 | THLongTensor_free(mask_cpu); 64 | THLongTensor_free(remv_cpu); 65 | 66 | return 1; 67 | } 68 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/nms/src/nms_cuda.h: -------------------------------------------------------------------------------- 1 | int gpu_nms(THLongTensor * keep_out, THLongTensor* num_out, THCudaTensor * boxes, float nms_overlap_thresh); -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/roialign/__init__.py -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benoit-cty/decentralized_AI/fd6917700f28ef1dc25d59a4aad5913e47b2a099/openmined/pytorch-mask-rcnn-master/roialign/roi_align/__init__.py
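The cpu_nms and gpu_nms entry points above implement the same greedy suppression: walk the boxes in descending score order, keep the current box, and discard every remaining box whose IoU with it reaches the threshold (the GPU version encodes the pairwise overlaps as 64-bit bitmasks per thread block and resolves them sequentially on the host). As a sanity check for the C extension, here is an illustrative pure-NumPy re-implementation; it is not part of the repository, and it assumes rows of `dets` are (y1, x1, y2, x2, score), the layout pth_nms uses, with the same +1 area convention:

```python
import numpy as np

def numpy_nms(dets, thresh):
    """Greedy NMS reference: returns indices of the kept boxes."""
    y1, x1 = dets[:, 0], dets[:, 1]
    y2, x2 = dets[:, 2], dets[:, 3]
    scores = dets[:, 4]

    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # highest score first

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the kept box with all remaining candidates
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep candidates below the threshold (cpu_nms suppresses at ovr >= thresh)
        order = order[1:][iou < thresh]
    return np.array(keep, dtype=np.int64)
```

Up to tie-breaking among equal scores, `numpy_nms(dets.cpu().numpy(), thresh)` should keep the same boxes as `pth_nms(dets, thresh)`.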
-------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/build.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | from torch.utils.ffi import create_extension 4 | 5 | 6 | sources = ['src/crop_and_resize.c'] 7 | headers = ['src/crop_and_resize.h'] 8 | defines = [] 9 | with_cuda = False 10 | 11 | extra_objects = [] 12 | if torch.cuda.is_available(): 13 | print('Including CUDA code.') 14 | sources += ['src/crop_and_resize_gpu.c'] 15 | headers += ['src/crop_and_resize_gpu.h'] 16 | defines += [('WITH_CUDA', None)] 17 | extra_objects += ['src/cuda/crop_and_resize_kernel.cu.o'] 18 | with_cuda = True 19 | 20 | extra_compile_args = ['-fopenmp', '-std=c99'] 21 | 22 | this_file = os.path.dirname(os.path.realpath(__file__)) 23 | print(this_file) 24 | sources = [os.path.join(this_file, fname) for fname in sources] 25 | headers = [os.path.join(this_file, fname) for fname in headers] 26 | extra_objects = [os.path.join(this_file, fname) for fname in extra_objects] 27 | 28 | ffi = create_extension( 29 | '_ext.crop_and_resize', 30 | headers=headers, 31 | sources=sources, 32 | define_macros=defines, 33 | relative_to=__file__, 34 | with_cuda=with_cuda, 35 | extra_objects=extra_objects, 36 | extra_compile_args=extra_compile_args 37 | ) 38 | 39 | if __name__ == '__main__': 40 | ffi.build() 41 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/crop_and_resize.py: -------------------------------------------------------------------------------- 1 | import math 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | from torch.autograd import Function 6 | 7 | from ._ext import crop_and_resize as _backend 8 | 9 | 10 | class CropAndResizeFunction(Function): 11 | 12 | def __init__(self, crop_height, crop_width, extrapolation_value=0): 13 | self.crop_height = crop_height 14 | self.crop_width = crop_width 15 | self.extrapolation_value = extrapolation_value 16 | 17 | def forward(self, image, boxes, box_ind): 18 | crops = torch.zeros_like(image) 19 | 20 | if image.is_cuda: 21 | _backend.crop_and_resize_gpu_forward( 22 | image, boxes, box_ind, 23 | self.extrapolation_value, self.crop_height, self.crop_width, crops) 24 | else: 25 | _backend.crop_and_resize_forward( 26 | image, boxes, box_ind, 27 | self.extrapolation_value, self.crop_height, self.crop_width, crops) 28 | 29 | # save for backward 30 | self.im_size = image.size() 31 | self.save_for_backward(boxes, box_ind) 32 | 33 | return crops 34 | 35 | def backward(self, grad_outputs): 36 | boxes, box_ind = self.saved_tensors 37 | 38 | grad_outputs = grad_outputs.contiguous() 39 | grad_image = torch.zeros_like(grad_outputs).resize_(*self.im_size) 40 | 41 | if grad_outputs.is_cuda: 42 | _backend.crop_and_resize_gpu_backward( 43 | grad_outputs, boxes, box_ind, grad_image 44 | ) 45 | else: 46 | _backend.crop_and_resize_backward( 47 | grad_outputs, boxes, box_ind, grad_image 48 | ) 49 | 50 | return grad_image, None, None 51 | 52 | 53 | class CropAndResize(nn.Module): 54 | """ 55 | Crop and resize ported from tensorflow 56 | See more details on https://www.tensorflow.org/api_docs/python/tf/image/crop_and_resize 57 | """ 58 | 59 | def __init__(self, crop_height, crop_width, extrapolation_value=0): 60 | super(CropAndResize, self).__init__() 61 | 62 | self.crop_height = crop_height 63 | self.crop_width = crop_width 64 | 
self.extrapolation_value = extrapolation_value 65 | 66 | def forward(self, image, boxes, box_ind): 67 | return CropAndResizeFunction(self.crop_height, self.crop_width, self.extrapolation_value)(image, boxes, box_ind) 68 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/roi_align.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import nn 3 | 4 | from .crop_and_resize import CropAndResizeFunction, CropAndResize 5 | 6 | 7 | class RoIAlign(nn.Module): 8 | 9 | def __init__(self, crop_height, crop_width, extrapolation_value=0, transform_fpcoor=True): 10 | super(RoIAlign, self).__init__() 11 | 12 | self.crop_height = crop_height 13 | self.crop_width = crop_width 14 | self.extrapolation_value = extrapolation_value 15 | self.transform_fpcoor = transform_fpcoor 16 | 17 | def forward(self, featuremap, boxes, box_ind): 18 | """ 19 | RoIAlign based on crop_and_resize. 20 | See more details on https://github.com/ppwwyyxx/tensorpack/blob/6d5ba6a970710eaaa14b89d24aace179eb8ee1af/examples/FasterRCNN/model.py#L301 21 | :param featuremap: NxCxHxW 22 | :param boxes: Mx4 float box with (x1, y1, x2, y2) **without normalization** 23 | :param box_ind: M 24 | :return: MxCxoHxoW 25 | """ 26 | x1, y1, x2, y2 = torch.split(boxes, 1, dim=1) 27 | image_height, image_width = featuremap.size()[2:4] 28 | 29 | if self.transform_fpcoor: 30 | spacing_w = (x2 - x1) / float(self.crop_width) 31 | spacing_h = (y2 - y1) / float(self.crop_height) 32 | 33 | nx0 = (x1 + spacing_w / 2 - 0.5) / float(image_width - 1) 34 | ny0 = (y1 + spacing_h / 2 - 0.5) / float(image_height - 1) 35 | nw = spacing_w * float(self.crop_width - 1) / float(image_width - 1) 36 | nh = spacing_h * float(self.crop_height - 1) / float(image_height - 1) 37 | 38 | boxes = torch.cat((ny0, nx0, ny0 + nh, nx0 + nw), 1) 39 | else: 40 | x1 = x1 / float(image_width - 1) 41 | x2 = x2 / float(image_width - 1) 42 | y1 = y1 / float(image_height - 1) 43 | y2 = y2 / float(image_height - 1) 44 | boxes = torch.cat((y1, x1, y2, x2), 1) 45 | 46 | boxes = boxes.detach().contiguous() 47 | box_ind = box_ind.detach() 48 | return CropAndResizeFunction(self.crop_height, self.crop_width, self.extrapolation_value)(featuremap, boxes, box_ind) 49 | -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/src/crop_and_resize.c: -------------------------------------------------------------------------------- 1 | #include <TH/TH.h> 2 | #include <stdio.h> 3 | #include <math.h> 4 | 5 | 6 | void CropAndResizePerBox( 7 | const float * image_data, 8 | const int batch_size, 9 | const int depth, 10 | const int image_height, 11 | const int image_width, 12 | 13 | const float * boxes_data, 14 | const int * box_index_data, 15 | const int start_box, 16 | const int limit_box, 17 | 18 | float * corps_data, 19 | const int crop_height, 20 | const int crop_width, 21 | const float extrapolation_value 22 | ) { 23 | const int image_channel_elements = image_height * image_width; 24 | const int image_elements = depth * image_channel_elements; 25 | 26 | const int channel_elements = crop_height * crop_width; 27 | const int crop_elements = depth * channel_elements; 28 | 29 | int b; 30 | #pragma omp parallel for 31 | for (b = start_box; b < limit_box; ++b) { 32 | const float * box = boxes_data + b * 4; 33 | const float y1 = box[0]; 34 | const float x1 = box[1]; 35 | const float y2 = box[2]; 36 | const float x2 =
box[3]; 37 | 38 | const int b_in = box_index_data[b]; 39 | if (b_in < 0 || b_in >= batch_size) { 40 | printf("Error: batch_index %d out of range [0, %d)\n", b_in, batch_size); 41 | exit(-1); 42 | } 43 | 44 | const float height_scale = 45 | (crop_height > 1) 46 | ? (y2 - y1) * (image_height - 1) / (crop_height - 1) 47 | : 0; 48 | const float width_scale = 49 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) 50 | : 0; 51 | 52 | for (int y = 0; y < crop_height; ++y) 53 | { 54 | const float in_y = (crop_height > 1) 55 | ? y1 * (image_height - 1) + y * height_scale 56 | : 0.5 * (y1 + y2) * (image_height - 1); 57 | 58 | if (in_y < 0 || in_y > image_height - 1) 59 | { 60 | for (int x = 0; x < crop_width; ++x) 61 | { 62 | for (int d = 0; d < depth; ++d) 63 | { 64 | // crops(b, y, x, d) = extrapolation_value; 65 | corps_data[crop_elements * b + channel_elements * d + y * crop_width + x] = extrapolation_value; 66 | } 67 | } 68 | continue; 69 | } 70 | 71 | const int top_y_index = floorf(in_y); 72 | const int bottom_y_index = ceilf(in_y); 73 | const float y_lerp = in_y - top_y_index; 74 | 75 | for (int x = 0; x < crop_width; ++x) 76 | { 77 | const float in_x = (crop_width > 1) 78 | ? x1 * (image_width - 1) + x * width_scale 79 | : 0.5 * (x1 + x2) * (image_width - 1); 80 | if (in_x < 0 || in_x > image_width - 1) 81 | { 82 | for (int d = 0; d < depth; ++d) 83 | { 84 | corps_data[crop_elements * b + channel_elements * d + y * crop_width + x] = extrapolation_value; 85 | } 86 | continue; 87 | } 88 | 89 | const int left_x_index = floorf(in_x); 90 | const int right_x_index = ceilf(in_x); 91 | const float x_lerp = in_x - left_x_index; 92 | 93 | for (int d = 0; d < depth; ++d) 94 | { 95 | const float *pimage = image_data + b_in * image_elements + d * image_channel_elements; 96 | 97 | const float top_left = pimage[top_y_index * image_width + left_x_index]; 98 | const float top_right = pimage[top_y_index * image_width + right_x_index]; 99 | const float bottom_left = pimage[bottom_y_index * image_width + left_x_index]; 100 | const float bottom_right = pimage[bottom_y_index * image_width + right_x_index]; 101 | 102 | const float top = top_left + (top_right - top_left) * x_lerp; 103 | const float bottom = 104 | bottom_left + (bottom_right - bottom_left) * x_lerp; 105 | 106 | corps_data[crop_elements * b + channel_elements * d + y * crop_width + x] = top + (bottom - top) * y_lerp; 107 | } 108 | } // end for x 109 | } // end for y 110 | } // end for b 111 | 112 | } 113 | 114 | 115 | void crop_and_resize_forward( 116 | THFloatTensor * image, 117 | THFloatTensor * boxes, // [y1, x1, y2, x2] 118 | THIntTensor * box_index, // range in [0, batch_size) 119 | const float extrapolation_value, 120 | const int crop_height, 121 | const int crop_width, 122 | THFloatTensor * crops 123 | ) { 124 | const int batch_size = image->size[0]; 125 | const int depth = image->size[1]; 126 | const int image_height = image->size[2]; 127 | const int image_width = image->size[3]; 128 | 129 | const int num_boxes = boxes->size[0]; 130 | 131 | // init output space 132 | THFloatTensor_resize4d(crops, num_boxes, depth, crop_height, crop_width); 133 | THFloatTensor_zero(crops); 134 | 135 | // crop_and_resize for each box 136 | CropAndResizePerBox( 137 | THFloatTensor_data(image), 138 | batch_size, 139 | depth, 140 | image_height, 141 | image_width, 142 | 143 | THFloatTensor_data(boxes), 144 | THIntTensor_data(box_index), 145 | 0, 146 | num_boxes, 147 | 148 | THFloatTensor_data(crops), 149 | crop_height, 150 | crop_width, 151 | 
extrapolation_value 152 | ); 153 | 154 | } 155 | 156 | 157 | void crop_and_resize_backward( 158 | THFloatTensor * grads, 159 | THFloatTensor * boxes, // [y1, x1, y2, x2] 160 | THIntTensor * box_index, // range in [0, batch_size) 161 | THFloatTensor * grads_image // resize to [bsize, c, hc, wc] 162 | ) 163 | { 164 | // shape 165 | const int batch_size = grads_image->size[0]; 166 | const int depth = grads_image->size[1]; 167 | const int image_height = grads_image->size[2]; 168 | const int image_width = grads_image->size[3]; 169 | 170 | const int num_boxes = grads->size[0]; 171 | const int crop_height = grads->size[2]; 172 | const int crop_width = grads->size[3]; 173 | 174 | // n_elements 175 | const int image_channel_elements = image_height * image_width; 176 | const int image_elements = depth * image_channel_elements; 177 | 178 | const int channel_elements = crop_height * crop_width; 179 | const int crop_elements = depth * channel_elements; 180 | 181 | // init output space 182 | THFloatTensor_zero(grads_image); 183 | 184 | // data pointer 185 | const float * grads_data = THFloatTensor_data(grads); 186 | const float * boxes_data = THFloatTensor_data(boxes); 187 | const int * box_index_data = THIntTensor_data(box_index); 188 | float * grads_image_data = THFloatTensor_data(grads_image); 189 | 190 | for (int b = 0; b < num_boxes; ++b) { 191 | const float * box = boxes_data + b * 4; 192 | const float y1 = box[0]; 193 | const float x1 = box[1]; 194 | const float y2 = box[2]; 195 | const float x2 = box[3]; 196 | 197 | const int b_in = box_index_data[b]; 198 | if (b_in < 0 || b_in >= batch_size) { 199 | printf("Error: batch_index %d out of range [0, %d)\n", b_in, batch_size); 200 | exit(-1); 201 | } 202 | 203 | const float height_scale = 204 | (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1) 205 | : 0; 206 | const float width_scale = 207 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) 208 | : 0; 209 | 210 | for (int y = 0; y < crop_height; ++y) 211 | { 212 | const float in_y = (crop_height > 1) 213 | ? y1 * (image_height - 1) + y * height_scale 214 | : 0.5 * (y1 + y2) * (image_height - 1); 215 | if (in_y < 0 || in_y > image_height - 1) 216 | { 217 | continue; 218 | } 219 | const int top_y_index = floorf(in_y); 220 | const int bottom_y_index = ceilf(in_y); 221 | const float y_lerp = in_y - top_y_index; 222 | 223 | for (int x = 0; x < crop_width; ++x) 224 | { 225 | const float in_x = (crop_width > 1) 226 | ? 
x1 * (image_width - 1) + x * width_scale 227 | : 0.5 * (x1 + x2) * (image_width - 1); 228 | if (in_x < 0 || in_x > image_width - 1) 229 | { 230 | continue; 231 | } 232 | const int left_x_index = floorf(in_x); 233 | const int right_x_index = ceilf(in_x); 234 | const float x_lerp = in_x - left_x_index; 235 | 236 | for (int d = 0; d < depth; ++d) 237 | { 238 | float *pimage = grads_image_data + b_in * image_elements + d * image_channel_elements; 239 | const float grad_val = grads_data[crop_elements * b + channel_elements * d + y * crop_width + x]; 240 | 241 | const float dtop = (1 - y_lerp) * grad_val; 242 | pimage[top_y_index * image_width + left_x_index] += (1 - x_lerp) * dtop; 243 | pimage[top_y_index * image_width + right_x_index] += x_lerp * dtop; 244 | 245 | const float dbottom = y_lerp * grad_val; 246 | pimage[bottom_y_index * image_width + left_x_index] += (1 - x_lerp) * dbottom; 247 | pimage[bottom_y_index * image_width + right_x_index] += x_lerp * dbottom; 248 | } // end d 249 | } // end x 250 | } // end y 251 | } // end b 252 | } -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/src/crop_and_resize.h: -------------------------------------------------------------------------------- 1 | void crop_and_resize_forward( 2 | THFloatTensor * image, 3 | THFloatTensor * boxes, // [y1, x1, y2, x2] 4 | THIntTensor * box_index, // range in [0, batch_size) 5 | const float extrapolation_value, 6 | const int crop_height, 7 | const int crop_width, 8 | THFloatTensor * crops 9 | ); 10 | 11 | void crop_and_resize_backward( 12 | THFloatTensor * grads, 13 | THFloatTensor * boxes, // [y1, x1, y2, x2] 14 | THIntTensor * box_index, // range in [0, batch_size) 15 | THFloatTensor * grads_image // resize to [bsize, c, hc, wc] 16 | ); -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/src/crop_and_resize_gpu.c: -------------------------------------------------------------------------------- 1 | #include <THC/THC.h> 2 | #include "cuda/crop_and_resize_kernel.h" 3 | 4 | extern THCState *state; 5 | 6 | 7 | void crop_and_resize_gpu_forward( 8 | THCudaTensor * image, 9 | THCudaTensor * boxes, // [y1, x1, y2, x2] 10 | THCudaIntTensor * box_index, // range in [0, batch_size) 11 | const float extrapolation_value, 12 | const int crop_height, 13 | const int crop_width, 14 | THCudaTensor * crops 15 | ) { 16 | const int batch_size = THCudaTensor_size(state, image, 0); 17 | const int depth = THCudaTensor_size(state, image, 1); 18 | const int image_height = THCudaTensor_size(state, image, 2); 19 | const int image_width = THCudaTensor_size(state, image, 3); 20 | 21 | const int num_boxes = THCudaTensor_size(state, boxes, 0); 22 | 23 | // init output space 24 | THCudaTensor_resize4d(state, crops, num_boxes, depth, crop_height, crop_width); 25 | THCudaTensor_zero(state, crops); 26 | 27 | cudaStream_t stream = THCState_getCurrentStream(state); 28 | CropAndResizeLaucher( 29 | THCudaTensor_data(state, image), 30 | THCudaTensor_data(state, boxes), 31 | THCudaIntTensor_data(state, box_index), 32 | num_boxes, batch_size, image_height, image_width, 33 | crop_height, crop_width, depth, extrapolation_value, 34 | THCudaTensor_data(state, crops), 35 | stream 36 | ); 37 | } 38 | 39 | 40 | void crop_and_resize_gpu_backward( 41 | THCudaTensor * grads, 42 | THCudaTensor * boxes, // [y1, x1, y2, x2] 43 | THCudaIntTensor * box_index, // range in [0, batch_size) 44 | THCudaTensor *
grads_image // resize to [bsize, c, hc, wc] 45 | ) { 46 | // shape 47 | const int batch_size = THCudaTensor_size(state, grads_image, 0); 48 | const int depth = THCudaTensor_size(state, grads_image, 1); 49 | const int image_height = THCudaTensor_size(state, grads_image, 2); 50 | const int image_width = THCudaTensor_size(state, grads_image, 3); 51 | 52 | const int num_boxes = THCudaTensor_size(state, grads, 0); 53 | const int crop_height = THCudaTensor_size(state, grads, 2); 54 | const int crop_width = THCudaTensor_size(state, grads, 3); 55 | 56 | // init output space 57 | THCudaTensor_zero(state, grads_image); 58 | 59 | cudaStream_t stream = THCState_getCurrentStream(state); 60 | CropAndResizeBackpropImageLaucher( 61 | THCudaTensor_data(state, grads), 62 | THCudaTensor_data(state, boxes), 63 | THCudaIntTensor_data(state, box_index), 64 | num_boxes, batch_size, image_height, image_width, 65 | crop_height, crop_width, depth, 66 | THCudaTensor_data(state, grads_image), 67 | stream 68 | ); 69 | } -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/src/crop_and_resize_gpu.h: -------------------------------------------------------------------------------- 1 | void crop_and_resize_gpu_forward( 2 | THCudaTensor * image, 3 | THCudaTensor * boxes, // [y1, x1, y2, x2] 4 | THCudaIntTensor * box_index, // range in [0, batch_size) 5 | const float extrapolation_value, 6 | const int crop_height, 7 | const int crop_width, 8 | THCudaTensor * crops 9 | ); 10 | 11 | void crop_and_resize_gpu_backward( 12 | THCudaTensor * grads, 13 | THCudaTensor * boxes, // [y1, x1, y2, x2] 14 | THCudaIntTensor * box_index, // range in [0, batch_size) 15 | THCudaTensor * grads_image // resize to [bsize, c, hc, wc] 16 | ); -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/src/cuda/crop_and_resize_kernel.cu: -------------------------------------------------------------------------------- 1 | #include <math.h> 2 | #include <stdio.h> 3 | #include "crop_and_resize_kernel.h" 4 | 5 | #define CUDA_1D_KERNEL_LOOP(i, n) \ 6 | for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \ 7 | i += blockDim.x * gridDim.x) 8 | 9 | 10 | __global__ 11 | void CropAndResizeKernel( 12 | const int nthreads, const float *image_ptr, const float *boxes_ptr, 13 | const int *box_ind_ptr, int num_boxes, int batch, int image_height, 14 | int image_width, int crop_height, int crop_width, int depth, 15 | float extrapolation_value, float *crops_ptr) 16 | { 17 | CUDA_1D_KERNEL_LOOP(out_idx, nthreads) 18 | { 19 | // NHWC: out_idx = d + depth * (w + crop_width * (h + crop_height * b)) 20 | // NCHW: out_idx = w + crop_width * (h + crop_height * (d + depth * b)) 21 | int idx = out_idx; 22 | const int x = idx % crop_width; 23 | idx /= crop_width; 24 | const int y = idx % crop_height; 25 | idx /= crop_height; 26 | const int d = idx % depth; 27 | const int b = idx / depth; 28 | 29 | const float y1 = boxes_ptr[b * 4]; 30 | const float x1 = boxes_ptr[b * 4 + 1]; 31 | const float y2 = boxes_ptr[b * 4 + 2]; 32 | const float x2 = boxes_ptr[b * 4 + 3]; 33 | 34 | const int b_in = box_ind_ptr[b]; 35 | if (b_in < 0 || b_in >= batch) 36 | { 37 | continue; 38 | } 39 | 40 | const float height_scale = 41 | (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1) 42 | : 0; 43 | const float width_scale = 44 | (crop_width > 1) ?
(x2 - x1) * (image_width - 1) / (crop_width - 1) : 0; 45 | 46 | const float in_y = (crop_height > 1) 47 | ? y1 * (image_height - 1) + y * height_scale 48 | : 0.5 * (y1 + y2) * (image_height - 1); 49 | if (in_y < 0 || in_y > image_height - 1) 50 | { 51 | crops_ptr[out_idx] = extrapolation_value; 52 | continue; 53 | } 54 | 55 | const float in_x = (crop_width > 1) 56 | ? x1 * (image_width - 1) + x * width_scale 57 | : 0.5 * (x1 + x2) * (image_width - 1); 58 | if (in_x < 0 || in_x > image_width - 1) 59 | { 60 | crops_ptr[out_idx] = extrapolation_value; 61 | continue; 62 | } 63 | 64 | const int top_y_index = floorf(in_y); 65 | const int bottom_y_index = ceilf(in_y); 66 | const float y_lerp = in_y - top_y_index; 67 | 68 | const int left_x_index = floorf(in_x); 69 | const int right_x_index = ceilf(in_x); 70 | const float x_lerp = in_x - left_x_index; 71 | 72 | const float *pimage = image_ptr + (b_in * depth + d) * image_height * image_width; 73 | const float top_left = pimage[top_y_index * image_width + left_x_index]; 74 | const float top_right = pimage[top_y_index * image_width + right_x_index]; 75 | const float bottom_left = pimage[bottom_y_index * image_width + left_x_index]; 76 | const float bottom_right = pimage[bottom_y_index * image_width + right_x_index]; 77 | 78 | const float top = top_left + (top_right - top_left) * x_lerp; 79 | const float bottom = bottom_left + (bottom_right - bottom_left) * x_lerp; 80 | crops_ptr[out_idx] = top + (bottom - top) * y_lerp; 81 | } 82 | } 83 | 84 | __global__ 85 | void CropAndResizeBackpropImageKernel( 86 | const int nthreads, const float *grads_ptr, const float *boxes_ptr, 87 | const int *box_ind_ptr, int num_boxes, int batch, int image_height, 88 | int image_width, int crop_height, int crop_width, int depth, 89 | float *grads_image_ptr) 90 | { 91 | CUDA_1D_KERNEL_LOOP(out_idx, nthreads) 92 | { 93 | // NHWC: out_idx = d + depth * (w + crop_width * (h + crop_height * b)) 94 | // NCHW: out_idx = w + crop_width * (h + crop_height * (d + depth * b)) 95 | int idx = out_idx; 96 | const int x = idx % crop_width; 97 | idx /= crop_width; 98 | const int y = idx % crop_height; 99 | idx /= crop_height; 100 | const int d = idx % depth; 101 | const int b = idx / depth; 102 | 103 | const float y1 = boxes_ptr[b * 4]; 104 | const float x1 = boxes_ptr[b * 4 + 1]; 105 | const float y2 = boxes_ptr[b * 4 + 2]; 106 | const float x2 = boxes_ptr[b * 4 + 3]; 107 | 108 | const int b_in = box_ind_ptr[b]; 109 | if (b_in < 0 || b_in >= batch) 110 | { 111 | continue; 112 | } 113 | 114 | const float height_scale = 115 | (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1) 116 | : 0; 117 | const float width_scale = 118 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) : 0; 119 | 120 | const float in_y = (crop_height > 1) 121 | ? y1 * (image_height - 1) + y * height_scale 122 | : 0.5 * (y1 + y2) * (image_height - 1); 123 | if (in_y < 0 || in_y > image_height - 1) 124 | { 125 | continue; 126 | } 127 | 128 | const float in_x = (crop_width > 1) 129 | ? 
x1 * (image_width - 1) + x * width_scale 130 | : 0.5 * (x1 + x2) * (image_width - 1); 131 | if (in_x < 0 || in_x > image_width - 1) 132 | { 133 | continue; 134 | } 135 | 136 | const int top_y_index = floorf(in_y); 137 | const int bottom_y_index = ceilf(in_y); 138 | const float y_lerp = in_y - top_y_index; 139 | 140 | const int left_x_index = floorf(in_x); 141 | const int right_x_index = ceilf(in_x); 142 | const float x_lerp = in_x - left_x_index; 143 | 144 | float *pimage = grads_image_ptr + (b_in * depth + d) * image_height * image_width; 145 | const float dtop = (1 - y_lerp) * grads_ptr[out_idx]; 146 | atomicAdd( 147 | pimage + top_y_index * image_width + left_x_index, 148 | (1 - x_lerp) * dtop 149 | ); 150 | atomicAdd( 151 | pimage + top_y_index * image_width + right_x_index, 152 | x_lerp * dtop 153 | ); 154 | 155 | const float dbottom = y_lerp * grads_ptr[out_idx]; 156 | atomicAdd( 157 | pimage + bottom_y_index * image_width + left_x_index, 158 | (1 - x_lerp) * dbottom 159 | ); 160 | atomicAdd( 161 | pimage + bottom_y_index * image_width + right_x_index, 162 | x_lerp * dbottom 163 | ); 164 | } 165 | } 166 | 167 | 168 | void CropAndResizeLaucher( 169 | const float *image_ptr, const float *boxes_ptr, 170 | const int *box_ind_ptr, int num_boxes, int batch, int image_height, 171 | int image_width, int crop_height, int crop_width, int depth, 172 | float extrapolation_value, float *crops_ptr, cudaStream_t stream) 173 | { 174 | const int total_count = num_boxes * crop_height * crop_width * depth; 175 | const int thread_per_block = 1024; 176 | const int block_count = (total_count + thread_per_block - 1) / thread_per_block; 177 | cudaError_t err; 178 | 179 | if (total_count > 0) 180 | { 181 | CropAndResizeKernel<<<block_count, thread_per_block, 0, stream>>>( 182 | total_count, image_ptr, boxes_ptr, 183 | box_ind_ptr, num_boxes, batch, image_height, image_width, 184 | crop_height, crop_width, depth, extrapolation_value, crops_ptr); 185 | 186 | err = cudaGetLastError(); 187 | if (cudaSuccess != err) 188 | { 189 | fprintf(stderr, "cudaCheckError() failed : %s\n", cudaGetErrorString(err)); 190 | exit(-1); 191 | } 192 | } 193 | } 194 | 195 | 196 | void CropAndResizeBackpropImageLaucher( 197 | const float *grads_ptr, const float *boxes_ptr, 198 | const int *box_ind_ptr, int num_boxes, int batch, int image_height, 199 | int image_width, int crop_height, int crop_width, int depth, 200 | float *grads_image_ptr, cudaStream_t stream) 201 | { 202 | const int total_count = num_boxes * crop_height * crop_width * depth; 203 | const int thread_per_block = 1024; 204 | const int block_count = (total_count + thread_per_block - 1) / thread_per_block; 205 | cudaError_t err; 206 | 207 | if (total_count > 0) 208 | { 209 | CropAndResizeBackpropImageKernel<<<block_count, thread_per_block, 0, stream>>>( 210 | total_count, grads_ptr, boxes_ptr, 211 | box_ind_ptr, num_boxes, batch, image_height, image_width, 212 | crop_height, crop_width, depth, grads_image_ptr); 213 | 214 | err = cudaGetLastError(); 215 | if (cudaSuccess != err) 216 | { 217 | fprintf(stderr, "cudaCheckError() failed : %s\n", cudaGetErrorString(err)); 218 | exit(-1); 219 | } 220 | } 221 | } -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/roialign/roi_align/src/cuda/crop_and_resize_kernel.h: -------------------------------------------------------------------------------- 1 | #ifndef _CropAndResize_Kernel 2 | #define _CropAndResize_Kernel 3 | 4 | #ifdef __cplusplus 5 | extern "C" { 6 | #endif 7 | 8 | void CropAndResizeLaucher( 9 | const float *image_ptr, const float
*boxes_ptr, 10 | const int *box_ind_ptr, int num_boxes, int batch, int image_height, 11 | int image_width, int crop_height, int crop_width, int depth, 12 | float extrapolation_value, float *crops_ptr, cudaStream_t stream); 13 | 14 | void CropAndResizeBackpropImageLaucher( 15 | const float *grads_ptr, const float *boxes_ptr, 16 | const int *box_ind_ptr, int num_boxes, int batch, int image_height, 17 | int image_width, int crop_height, int crop_width, int depth, 18 | float *grads_image_ptr, cudaStream_t stream); 19 | 20 | #ifdef __cplusplus 21 | } 22 | #endif 23 | 24 | #endif -------------------------------------------------------------------------------- /openmined/pytorch-mask-rcnn-master/utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Mask R-CNN 3 | Common utility functions and classes. 4 | 5 | Copyright (c) 2017 Matterport, Inc. 6 | Licensed under the MIT License (see LICENSE for details) 7 | Written by Waleed Abdulla 8 | """ 9 | 10 | import sys 11 | import os 12 | import math 13 | import random 14 | import numpy as np 15 | import scipy.misc 16 | import scipy.ndimage 17 | import skimage.color 18 | import skimage.io 19 | import torch 20 | 21 | ############################################################ 22 | # Bounding Boxes 23 | ############################################################ 24 | 25 | def extract_bboxes(mask): 26 | """Compute bounding boxes from masks. 27 | mask: [height, width, num_instances]. Mask pixels are either 1 or 0. 28 | 29 | Returns: bbox array [num_instances, (y1, x1, y2, x2)]. 30 | """ 31 | boxes = np.zeros([mask.shape[-1], 4], dtype=np.int32) 32 | for i in range(mask.shape[-1]): 33 | m = mask[:, :, i] 34 | # Bounding box. 35 | horizontal_indicies = np.where(np.any(m, axis=0))[0] 36 | vertical_indicies = np.where(np.any(m, axis=1))[0] 37 | if horizontal_indicies.shape[0]: 38 | x1, x2 = horizontal_indicies[[0, -1]] 39 | y1, y2 = vertical_indicies[[0, -1]] 40 | # x2 and y2 should not be part of the box. Increment by 1. 41 | x2 += 1 42 | y2 += 1 43 | else: 44 | # No mask for this instance. Might happen due to 45 | # resizing or cropping. Set bbox to zeros 46 | x1, x2, y1, y2 = 0, 0, 0, 0 47 | boxes[i] = np.array([y1, x1, y2, x2]) 48 | return boxes.astype(np.int32) 49 | 50 | 51 | def compute_iou(box, boxes, box_area, boxes_area): 52 | """Calculates IoU of the given box with the array of the given boxes. 53 | box: 1D vector [y1, x1, y2, x2] 54 | boxes: [boxes_count, (y1, x1, y2, x2)] 55 | box_area: float. the area of 'box' 56 | boxes_area: array of length boxes_count. 57 | 58 | Note: the areas are passed in rather than calculated here for 59 | efficiency. Calculate once in the caller to avoid duplicate work. 60 | """ 61 | # Calculate intersection areas 62 | y1 = np.maximum(box[0], boxes[:, 0]) 63 | y2 = np.minimum(box[2], boxes[:, 2]) 64 | x1 = np.maximum(box[1], boxes[:, 1]) 65 | x2 = np.minimum(box[3], boxes[:, 3]) 66 | intersection = np.maximum(x2 - x1, 0) * np.maximum(y2 - y1, 0) 67 | union = box_area + boxes_area[:] - intersection[:] 68 | iou = intersection / union 69 | return iou 70 | 71 | 72 | def compute_overlaps(boxes1, boxes2): 73 | """Computes IoU overlaps between two sets of boxes. 74 | boxes1, boxes2: [N, (y1, x1, y2, x2)]. 75 | 76 | For better performance, pass the largest set first and the smaller second.
77 | """ 78 | # Areas of anchors and GT boxes 79 | area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1]) 80 | area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1]) 81 | 82 | # Compute overlaps to generate matrix [boxes1 count, boxes2 count] 83 | # Each cell contains the IoU value. 84 | overlaps = np.zeros((boxes1.shape[0], boxes2.shape[0])) 85 | for i in range(overlaps.shape[1]): 86 | box2 = boxes2[i] 87 | overlaps[:, i] = compute_iou(box2, boxes1, area2[i], area1) 88 | return overlaps 89 | 90 | def box_refinement(box, gt_box): 91 | """Compute refinement needed to transform box to gt_box. 92 | box and gt_box are [N, (y1, x1, y2, x2)] 93 | """ 94 | 95 | height = box[:, 2] - box[:, 0] 96 | width = box[:, 3] - box[:, 1] 97 | center_y = box[:, 0] + 0.5 * height 98 | center_x = box[:, 1] + 0.5 * width 99 | 100 | gt_height = gt_box[:, 2] - gt_box[:, 0] 101 | gt_width = gt_box[:, 3] - gt_box[:, 1] 102 | gt_center_y = gt_box[:, 0] + 0.5 * gt_height 103 | gt_center_x = gt_box[:, 1] + 0.5 * gt_width 104 | 105 | dy = (gt_center_y - center_y) / height 106 | dx = (gt_center_x - center_x) / width 107 | dh = torch.log(gt_height / height) 108 | dw = torch.log(gt_width / width) 109 | 110 | result = torch.stack([dy, dx, dh, dw], dim=1) 111 | return result 112 | 113 | 114 | ############################################################ 115 | # Dataset 116 | ############################################################ 117 | 118 | class Dataset(object): 119 | """The base class for dataset classes. 120 | To use it, create a new class that adds functions specific to the dataset 121 | you want to use. For example: 122 | 123 | class CatsAndDogsDataset(Dataset): 124 | def load_cats_and_dogs(self): 125 | ... 126 | def load_mask(self, image_id): 127 | ... 128 | def image_reference(self, image_id): 129 | ... 130 | 131 | See COCODataset and ShapesDataset as examples. 132 | """ 133 | 134 | def __init__(self, class_map=None): 135 | self._image_ids = [] 136 | self.image_info = [] 137 | # Background is always the first class 138 | self.class_info = [{"source": "", "id": 0, "name": "BG"}] 139 | self.source_class_ids = {} 140 | 141 | def add_class(self, source, class_id, class_name): 142 | assert "." not in source, "Source name cannot contain a dot" 143 | # Does the class exist already? 144 | for info in self.class_info: 145 | if info['source'] == source and info["id"] == class_id: 146 | # source.class_id combination already available, skip 147 | return 148 | # Add the class 149 | self.class_info.append({ 150 | "source": source, 151 | "id": class_id, 152 | "name": class_name, 153 | }) 154 | 155 | def add_image(self, source, image_id, path, **kwargs): 156 | image_info = { 157 | "id": image_id, 158 | "source": source, 159 | "path": path, 160 | } 161 | image_info.update(kwargs) 162 | self.image_info.append(image_info) 163 | 164 | def image_reference(self, image_id): 165 | """Return a link to the image in its source Website or details about 166 | the image that help looking it up or debugging it. 167 | 168 | Override for your dataset, but pass to this function 169 | if you encounter images not in your dataset. 170 | """ 171 | return "" 172 | 173 | def prepare(self, class_map=None): 174 | """Prepares the Dataset class for use. 175 | 176 | TODO: class map is not supported yet. When done, it should handle mapping 177 | classes from different datasets to the same class ID. 
178 | """ 179 | def clean_name(name): 180 | """Returns a shorter version of object names for cleaner display.""" 181 | return ",".join(name.split(",")[:1]) 182 | 183 | # Build (or rebuild) everything else from the info dicts. 184 | self.num_classes = len(self.class_info) 185 | self.class_ids = np.arange(self.num_classes) 186 | self.class_names = [clean_name(c["name"]) for c in self.class_info] 187 | self.num_images = len(self.image_info) 188 | self._image_ids = np.arange(self.num_images) 189 | 190 | self.class_from_source_map = {"{}.{}".format(info['source'], info['id']): id 191 | for info, id in zip(self.class_info, self.class_ids)} 192 | 193 | # Map sources to class_ids they support 194 | self.sources = list(set([i['source'] for i in self.class_info])) 195 | self.source_class_ids = {} 196 | # Loop over datasets 197 | for source in self.sources: 198 | self.source_class_ids[source] = [] 199 | # Find classes that belong to this dataset 200 | for i, info in enumerate(self.class_info): 201 | # Include BG class in all datasets 202 | if i == 0 or source == info['source']: 203 | self.source_class_ids[source].append(i) 204 | 205 | def map_source_class_id(self, source_class_id): 206 | """Takes a source class ID and returns the int class ID assigned to it. 207 | 208 | For example: 209 | dataset.map_source_class_id("coco.12") -> 23 210 | """ 211 | return self.class_from_source_map[source_class_id] 212 | 213 | def get_source_class_id(self, class_id, source): 214 | """Map an internal class ID to the corresponding class ID in the source dataset.""" 215 | info = self.class_info[class_id] 216 | assert info['source'] == source 217 | return info['id'] 218 | 219 | def append_data(self, class_info, image_info): 220 | self.external_to_class_id = {} 221 | for i, c in enumerate(self.class_info): 222 | for ds, id in c["map"]: 223 | self.external_to_class_id[ds + str(id)] = i 224 | 225 | # Map external image IDs to internal ones. 226 | self.external_to_image_id = {} 227 | for i, info in enumerate(self.image_info): 228 | self.external_to_image_id[info["ds"] + str(info["id"])] = i 229 | 230 | @property 231 | def image_ids(self): 232 | return self._image_ids 233 | 234 | def source_image_link(self, image_id): 235 | """Returns the path or URL to the image. 236 | Override this to return a URL to the image if it's available online for easy 237 | debugging. 238 | """ 239 | return self.image_info[image_id]["path"] 240 | 241 | def load_image(self, image_id): 242 | """Load the specified image and return a [H,W,3] Numpy array. 243 | """ 244 | # Load image 245 | image = skimage.io.imread(self.image_info[image_id]['path']) 246 | # If grayscale. Convert to RGB for consistency. 247 | if image.ndim != 3: 248 | image = skimage.color.gray2rgb(image) 249 | return image 250 | 251 | def load_mask(self, image_id): 252 | """Load instance masks for the given image. 253 | 254 | Different datasets use different ways to store masks. Override this 255 | method to load instance masks and return them in the form of an 256 | array of binary masks of shape [height, width, instances]. 257 | 258 | Returns: 259 | masks: A bool array of shape [height, width, instance count] with 260 | a binary mask per instance. 261 | class_ids: a 1D array of class IDs of the instance masks. 262 | """ 263 | # Override this function to load a mask from your dataset. 264 | # Otherwise, it returns an empty mask.
265 |         mask = np.empty([0, 0, 0])
266 |         class_ids = np.empty([0], np.int32)
267 |         return mask, class_ids
268 | 
269 | 
270 | def resize_image(image, min_dim=None, max_dim=None, padding=False):
271 |     """
272 |     Resizes an image keeping the aspect ratio.
273 | 
274 |     min_dim: if provided, resizes the image such that its smaller
275 |         dimension == min_dim
276 |     max_dim: if provided, ensures that the image's longest side doesn't
277 |         exceed this value.
278 |     padding: If true, pads image with zeros so its size is max_dim x max_dim
279 | 
280 |     Returns:
281 |     image: the resized image
282 |     window: (y1, x1, y2, x2). If max_dim is provided, padding might
283 |         be inserted in the returned image. If so, this window is the
284 |         coordinates of the image part of the full image (excluding
285 |         the padding). The x2, y2 pixels are not included.
286 |     scale: The scale factor used to resize the image
287 |     padding: Padding added to the image [(top, bottom), (left, right), (0, 0)]
288 |     """
289 |     # Default window (y1, x1, y2, x2) and default scale == 1.
290 |     h, w = image.shape[:2]
291 |     window = (0, 0, h, w)
292 |     scale = 1
293 | 
294 |     # Scale?
295 |     if min_dim:
296 |         # Scale up but not down
297 |         scale = max(1, min_dim / min(h, w))
298 |     # Does it exceed max dim?
299 |     if max_dim:
300 |         image_max = max(h, w)
301 |         if round(image_max * scale) > max_dim:
302 |             scale = max_dim / image_max
303 |     # Resize image and mask
304 |     if scale != 1:
305 |         image = scipy.misc.imresize(
306 |             image, (round(h * scale), round(w * scale)))
307 |     # Need padding?
308 |     if padding:
309 |         # Get new height and width
310 |         h, w = image.shape[:2]
311 |         top_pad = (max_dim - h) // 2
312 |         bottom_pad = max_dim - h - top_pad
313 |         left_pad = (max_dim - w) // 2
314 |         right_pad = max_dim - w - left_pad
315 |         padding = [(top_pad, bottom_pad), (left_pad, right_pad), (0, 0)]
316 |         image = np.pad(image, padding, mode='constant', constant_values=0)
317 |         window = (top_pad, left_pad, h + top_pad, w + left_pad)
318 |     return image, window, scale, padding
319 | 
320 | 
321 | def resize_mask(mask, scale, padding):
322 |     """Resizes a mask using the given scale and padding.
323 |     Typically, you get the scale and padding from resize_image() to
324 |     ensure both the image and the mask are resized consistently.
325 | 
326 |     scale: mask scaling factor
327 |     padding: Padding to add to the mask in the form
328 |         [(top, bottom), (left, right), (0, 0)]
329 |     """
330 |     h, w = mask.shape[:2]
331 |     mask = scipy.ndimage.zoom(mask, zoom=[scale, scale, 1], order=0)
332 |     mask = np.pad(mask, padding, mode='constant', constant_values=0)
333 |     return mask
334 | 
335 | 
336 | def minimize_mask(bbox, mask, mini_shape):
337 |     """Resize masks to a smaller version to cut memory load.
338 |     Mini-masks can then be resized back to image scale using expand_mask()
339 | 
340 |     See inspect_data.ipynb notebook for more details.
341 |     """
342 |     mini_mask = np.zeros(mini_shape + (mask.shape[-1],), dtype=bool)
343 |     for i in range(mask.shape[-1]):
344 |         m = mask[:, :, i]
345 |         y1, x1, y2, x2 = bbox[i][:4]
346 |         m = m[y1:y2, x1:x2]
347 |         if m.size == 0:
348 |             raise Exception("Invalid bounding box with area of zero")
349 |         m = scipy.misc.imresize(m.astype(float), mini_shape, interp='bilinear')
350 |         mini_mask[:, :, i] = np.where(m >= 128, 1, 0)
351 |     return mini_mask
352 | 
353 | 
354 | def expand_mask(bbox, mini_mask, image_shape):
355 |     """Resizes mini masks back to image size. Reverses the change
356 |     of minimize_mask().
357 | 
358 |     See inspect_data.ipynb notebook for more details.
359 |     """
360 |     mask = np.zeros(image_shape[:2] + (mini_mask.shape[-1],), dtype=bool)
361 |     for i in range(mask.shape[-1]):
362 |         m = mini_mask[:, :, i]
363 |         y1, x1, y2, x2 = bbox[i][:4]
364 |         h = y2 - y1
365 |         w = x2 - x1
366 |         m = scipy.misc.imresize(m.astype(float), (h, w), interp='bilinear')
367 |         mask[y1:y2, x1:x2, i] = np.where(m >= 128, 1, 0)
368 |     return mask
369 | 
370 | 
371 | # TODO: Build and use this function to reduce code duplication
372 | def mold_mask(mask, config):
373 |     pass
374 | 
375 | 
376 | def unmold_mask(mask, bbox, image_shape):
377 |     """Converts a mask generated by the neural network into a format similar
378 |     to its original shape.
379 |     mask: [height, width] of type float. A small, typically 28x28 mask.
380 |     bbox: [y1, x1, y2, x2]. The box to fit the mask in.
381 | 
382 |     Returns a binary mask with the same size as the original image.
383 |     """
384 |     threshold = 0.5
385 |     y1, x1, y2, x2 = bbox
386 |     mask = scipy.misc.imresize(
387 |         mask, (y2 - y1, x2 - x1), interp='bilinear').astype(np.float32) / 255.0
388 |     mask = np.where(mask >= threshold, 1, 0).astype(np.uint8)
389 | 
390 |     # Put the mask in the right location.
391 |     full_mask = np.zeros(image_shape[:2], dtype=np.uint8)
392 |     full_mask[y1:y2, x1:x2] = mask
393 |     return full_mask
394 | 
395 | 
396 | ############################################################
397 | # Anchors
398 | ############################################################
399 | 
400 | def generate_anchors(scales, ratios, shape, feature_stride, anchor_stride):
401 |     """
402 |     scales: 1D array of anchor sizes in pixels. Example: [32, 64, 128]
403 |     ratios: 1D array of anchor ratios of width/height. Example: [0.5, 1, 2]
404 |     shape: [height, width] spatial shape of the feature map over which
405 |            to generate anchors.
406 |     feature_stride: Stride of the feature map relative to the image in pixels.
407 |     anchor_stride: Stride of anchors on the feature map. For example, if the
408 |         value is 2 then generate anchors for every other feature map pixel.
409 |     """
410 |     # Get all combinations of scales and ratios
411 |     scales, ratios = np.meshgrid(np.array(scales), np.array(ratios))
412 |     scales = scales.flatten()
413 |     ratios = ratios.flatten()
414 | 
415 |     # Enumerate heights and widths from scales and ratios
416 |     heights = scales / np.sqrt(ratios)
417 |     widths = scales * np.sqrt(ratios)
418 | 
419 |     # Enumerate shifts in feature space
420 |     shifts_y = np.arange(0, shape[0], anchor_stride) * feature_stride
421 |     shifts_x = np.arange(0, shape[1], anchor_stride) * feature_stride
422 |     shifts_x, shifts_y = np.meshgrid(shifts_x, shifts_y)
423 | 
424 |     # Enumerate combinations of shifts, widths, and heights
425 |     box_widths, box_centers_x = np.meshgrid(widths, shifts_x)
426 |     box_heights, box_centers_y = np.meshgrid(heights, shifts_y)
427 | 
428 |     # Reshape to get a list of (y, x) and a list of (h, w)
429 |     box_centers = np.stack(
430 |         [box_centers_y, box_centers_x], axis=2).reshape([-1, 2])
431 |     box_sizes = np.stack([box_heights, box_widths], axis=2).reshape([-1, 2])
432 | 
433 |     # Convert to corner coordinates (y1, x1, y2, x2)
434 |     boxes = np.concatenate([box_centers - 0.5 * box_sizes,
435 |                             box_centers + 0.5 * box_sizes], axis=1)
436 |     return boxes
437 | 
438 | 
439 | def generate_pyramid_anchors(scales, ratios, feature_shapes, feature_strides,
440 |                              anchor_stride):
441 |     """Generate anchors at different levels of a feature pyramid. Each scale
442 |     is associated with a level of the pyramid, but each ratio is used in
443 |     all levels of the pyramid.
444 | 
445 |     Returns:
446 |     anchors: [N, (y1, x1, y2, x2)]. All generated anchors in one array. Sorted
447 |         with the same order of the given scales. So, anchors of scale[0] come
448 |         first, then anchors of scale[1], and so on.
449 |     """
450 |     # Anchors
451 |     # [anchor_count, (y1, x1, y2, x2)]
452 |     anchors = []
453 |     for i in range(len(scales)):
454 |         anchors.append(generate_anchors(scales[i], ratios, feature_shapes[i],
455 |                                         feature_strides[i], anchor_stride))
456 |     return np.concatenate(anchors, axis=0)
457 | 
458 | 
459 | 
460 | 
461 | 
462 | 
463 | 
--------------------------------------------------------------------------------
/wallet.json:
--------------------------------------------------------------------------------
1 | {
2 |   "privateKey": "0x0cf1de73eac7da286752cc6d9db07e7c803f5c239b31be6d5a89017fe4cb3a81",
3 |   "publicKey": "0xaf718e326ddf284c17e4beb04af0de608ddd7920591aa089be361a2802c35dc6af26f20469758fab95f5d49c3554aaa4b22f85f38b005d22f51119d76a371a4b",
4 |   "address": "0x9Bc813c9D1CE783a37841cA7751359b35a9AD763"
5 | }
--------------------------------------------------------------------------------
/whitepaper/whitepaper.md:
--------------------------------------------------------------------------------
1 | # Synergy between OpenMined and iExec
2 | ## Decentralized AI Whitepaper for Siraj's Decentralized Apps
3 | 
4 | Whitepaper giving an overview of the project as of June 2018.
5 | 
6 | WARNING: *this is a student project and should be treated as such, without any warranty of any kind. Use at your own risk.* This is our final project for [TheSchool.AI](https://www.theschool.ai), a decentralized application course by Siraj Raval. This whitepaper is inspired by the ones from iExec and the OpenMined project. We talk a lot about iExec & OpenMined because they are the core technologies we use for our project, but we have no link or affiliation of any kind with iExec.
7 | 
8 | ## Authors
9 | Benoit Courty, Matthew McAteer, Alexandre Moreau and Jeddi Mees.
10 | 
11 | ## Introduction
12 | 
13 | Artificial Intelligence (AI) is an umbrella term for systems that can learn rather than merely execute already-written behaviour. In recent years the term has come to include Machine Learning and, by extension, Deep Learning: techniques in computer science that allow programs to make inferences and predictions based on examples of input data. AI has been a boon to organizations with large silos of data at their disposal, but it has also raised concerns. These include, but are not limited to, user privacy, ownership of data, negative externalities of AI models optimizing for a narrowly-defined cost function, and the latency and vulnerability of such centralized services. We propose a dual, general-purpose paradigm for decentralized artificial intelligence that combines the best of two projects in the field: iExec and OpenMined.
14 | 
15 | ![MASK_R-CNN](../img/20180604_143926.png)
16 | 
17 | ## BLUEPRINT FOR DECENTRALIZED ARTIFICIAL INTELLIGENCE
18 | 
19 | Big tech companies are leading AI these days. They use AI in more and more features of every application. For example, Photoshop now uses deep learning to separate a subject from the background.
20 | 
21 | Features like that are really difficult for open-source software to match, because even with engineers willing to build them, open-source projects don't have the data. Data is the currency of this century. And we give it away for free to these companies so they can sell us products in return!
22 | 
23 | That is nonsense. We all deserve to keep our data safe, and to be able to monetize it or give it to whomever we want.
24 | The leading technology for AI is Deep Learning. The concept is simple: feed millions of labelled examples (e.g. a picture of a cat with the label "cat") to a program that learns by itself to identify a cat when it sees one, as the short sketch below illustrates.
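To make this concrete, here is a minimal supervised-learning sketch in the spirit of the repository's `mnist_cnn.py`: a tiny Keras network that learns the image-to-label mapping from examples. The architecture and hyperparameters are our own illustrative choices, not a prescription.

```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.utils import to_categorical

# Labelled examples: images paired with the digit they show
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# A tiny network that learns the image -> label mapping by itself
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1, batch_size=128)
print(model.evaluate(x_test, y_test))  # loss and accuracy on unseen examples
```

The same recipe scales from digits to cats: only the data and the network size change, which is exactly why labelled data and computing power are the bottlenecks.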
25 | 
26 | So there are three challenges:
27 | - Gathering millions of labelled data points
28 | - Having the computing power to train these algorithms
29 | - Democratizing the use of AI
30 | 
31 | Our contribution with this project is to address these points, allowing anyone to run Deep Learning algorithms with as little expertise as possible.
32 | 
33 | 
34 | ## Technologies
35 | 
36 | Our project is built around iExec, the Inter-Planetary FileSystem (IPFS), and OpenMined.
37 | 
38 | IPFS is a distributed data storage system. Rather than having content addressed by servers as with HTTP, IPFS creates unique addresses for the content itself, which is copied redundantly over multiple nodes. Its functionality has already been demonstrated multiple times in the real world, and multiple projects are building on top of it (such as FileCoin, which is developing ways to incentivize people to host nodes and thereby create a more stable network for people to keep their files on).
39 | 
40 | The iExec project's goal is to allow off-chain computation in a decentralized fashion. People can share their spare computing capacity for a given machine learning task, and they are rewarded with RLC tokens. In the future, the project aims to sell unique datasets on this network. (NOTE: iExec released its testnet only last week, so it is still challenging to use. For example, only the company's own workers execute tasks, so when they are stopped we have no way to work on our project.)
41 | 
42 | If iExec offers the marketplace, incentivization, and computation management system for distributed AI, OpenMined adds the computing paradigm for user and data anonymization. OpenMined combines machine learning with homomorphic encryption (encryption that still allows computations to be run on the encrypted data) and federated machine learning (improving the model by training on the user's data, on the user's own device).
43 | 
44 | Combined, the result is a fully distributed information storage, processing, and buying/selling system.
45 | 
46 | ## MARKET OPPORTUNITY
47 | 
48 | Decentralisation is essential if we want to get our privacy and liberty back.
49 | Open source was a first step. Blockchains like Ethereum took it to the next level by making it possible to incentivize people.
50 | But current blockchains, or rather distributed ledger technologies (DLT), cannot handle heavy computing tasks.
51 | We are at the beginning of a new era, with projects like [Open Mined](https://www.openmined.org/) and iExec.
52 | These projects aim at running demanding computations, with, in Open Mined's case, a great additional focus on privacy.
53 | 
54 | Privacy is no longer a cypherpunk concept; it is a mainstream subject, thanks to GDPR and Facebook's numerous leaks.
55 | More and more people see their data as valuable and meaningful assets.
56 | 
57 | ### The Blockchain market
58 | 
59 | According to the report "Blockchain Market by Provider, Application (Payments, Exchanges, Smart Contracts, Documentation, Digital Identity, Supply Chain Management, and GRC Management), Organization Size, Industry Vertical, and Region - Global Forecast to 2022", the blockchain market size is expected to grow from USD 411.5 million in 2017 to USD 7,683.7 million by 2022, at a Compound Annual Growth Rate (CAGR) of 79.6%. The key growth factors include a reduced total cost of ownership, faster transactions, simplified and transparent business processes with immutability, and the rising cryptocurrency market cap and ICO activity.
60 | Blockchain remains hard to grasp: most people see it only as a crypto-currency and not as another way to do computing, while there are tons of other use cases: identity, notary services, digital assets, smart contracts, digital voting, distributed storage, AI computing, etc.
61 | 
62 | ### The DApps market
63 | DApps are Decentralized Applications, a new kind of application. These applications are not owned by anyone, can't be shut down, and cannot have downtime. A DApp should meet three criteria: open source, decentralized, and incentivized (with digital assets to fuel itself). There are DApps built on top of the two biggest blockchain platforms, Bitcoin and Ethereum, and some DApps built on their own blockchain.
64 | 
65 | New DApps are built every day, as you can see on https://www.stateofthedapps.com, which lists 1,576 DApps in its explorer, or on https://dappradar.com
66 | 
67 | Everything can be decentralized. We believe that in the future, all kinds of applications will be decentralized, even the biggest ones.
68 | 
69 | One current issue is that DApps are not necessarily user-friendly, so it's pretty hard for them to reach a mass market. Another issue is scalability. Ethereum's scalability issues were recently emphasized by the popular cat-collecting virtual game CryptoKitties (a DApp game). The viral game clogged the network, which can only handle about 10 transactions per second, and transaction fees skyrocketed.
70 | 
71 | ### The traditional cloud market
72 | 
73 | Many companies are transferring their infrastructure to the cloud. It's a huge and continuously growing market, worth $140 billion in 2018 alone. But privacy, and at the very least knowing where data goes in the cloud, is a growing concern too.
74 | 
75 | ### The edge and fog computing market
76 | 
77 | A new approach began to emerge with "fog computing". In this vision, edge devices carry out all the computation and storage they can handle; only the important state is stored online. This combines well with decentralization and blockchain.
78 | 
79 | ## TECHNOLOGY OVERVIEW
80 | ### Background
81 | 
82 | Computing on a blockchain is limited to a few instructions, and it will probably remain that way.
83 | 
84 | But there is a need for heavy computations, such as AI, video encoding and 3D rendering, that a device like a smartphone or a laptop cannot handle.
85 | 
86 | This is what we tackle in our project.
87 | 
88 | ### Our stack
89 | 
90 | The project has two main parts:
91 | - The training part, using Open Mined.
92 | - The prediction part, with the front-end, which is the user interface, and the back-end, which does the computation.
93 | 
94 | ### Training part: Open Mined
95 | 
96 | This part is not finished yet. We have only made a proof of concept, to be sure it is the best solution for our needs.
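To give an idea of what that proof of concept looks like, here is a minimal, hypothetical sketch of federated training with PySyft (the library behind Open Mined), using simulated `VirtualWorker` grid nodes. The worker names and toy data are our own illustration, and the exact API may differ between PySyft releases.

```python
import torch
from torch import nn, optim
import syft as sy

hook = sy.TorchHook(torch)                  # adds .send()/.get() to tensors
alice = sy.VirtualWorker(hook, id="alice")  # simulated grid node
bob = sy.VirtualWorker(hook, id="bob")      # simulated grid node

# Toy dataset, split between two data owners who keep their data local
data = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], requires_grad=True)
target = torch.tensor([[0.], [0.], [1.], [1.]], requires_grad=True)
remote_datasets = [(data[:2].send(alice), target[:2].send(alice)),
                   (data[2:].send(bob), target[2:].send(bob))]

model = nn.Linear(2, 1)

for data_ptr, target_ptr in remote_datasets:
    remote_model = model.copy().send(data_ptr.location)  # train where the data lives
    opt = optim.SGD(remote_model.parameters(), lr=0.1)
    for _ in range(10):
        opt.zero_grad()
        loss = ((remote_model(data_ptr) - target_ptr) ** 2).sum()
        loss.backward()
        opt.step()
    model = remote_model.get()  # only the updated weights come back
```

The raw data never leaves its owner; only model updates travel. That is the property we need before adding iExec's incentive layer on top.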
97 | 
98 | You can find it in the [openmined directory](https://github.com/trancept/decentralized_AI/tree/master/openmined).
99 | 
100 | It trains a PyTorch model using the Open Mined distributed grid computing.
101 | 
102 | ![Open Mined](https://cdn-images-1.medium.com/max/1600/1*GK9cCoOLSii191cjAlwsww.png)
103 | 
104 | After training a model, we transfer it to the second part of the project to be used by end-users.
105 | 
106 | ### Prediction part: iExec
107 | 
108 | The prediction part uses a Docker image that is executed on the iExec network when a user requests an AI task.
109 | 
110 | #### iExec infrastructure
111 | 
112 | iExec offers a cloud computing marketplace that turns computing into a commodity: one can easily buy computing resources.
113 | 
114 | ![iExec](https://github.com/trancept/decentralized_AI/blob/master/img/architecture_2.png)
115 | 
116 | - A computing resource is called a "worker".
117 | - Workers are grouped together in a "workerpool". A worker pool could be a traditional cloud provider that wants to monetize its unused computing power, or individuals who want to earn a little money from their home computers.
118 | - The iExec marketplace is where workers sell their power to buyers, like an open marketplace or an exchange. It's a pay-per-task system, a bit like a cloud API provider.
119 | - The DApp store is where you can find packaged applications to run on the iExec network. For our project, that's where we put our instance segmentation DApp.
120 | - The data marketplace is where you will be able to sell or buy data (available in a future release).
121 | 
122 | #### Architecture
123 | Our project is built on top of iExec to offer an easy way to request a machine learning task.
124 | 
125 | Here is the architecture:
126 | 
127 | ![project_blueprint](https://github.com/trancept/decentralized_AI/blob/master/img/architecture_1.png)
128 | 
129 | #### The Back-end
130 | 
131 | We built a Docker image with Keras, TensorFlow, Python 3 and matplotlib in headless mode, to render the result to a file.
132 | 
133 | We added the Mask R-CNN (region-based convolutional neural network) weights file trained on the [COCO dataset](http://cocodataset.org/).
134 | 
135 | We wrote a Python script based on the demo Jupyter Notebook from Matterport for [Mask RCNN](https://github.com/matterport/Mask_RCNN), and pushed the Docker image to [DockerHub](https://hub.docker.com/).
136 | 
137 | We made an iExec DApp (decentralized application) using the just-released [iExec SDK V2](https://github.com/iExecBlockchainComputing/iexec-sdk).
138 | 
139 | We deployed it to the [iExec marketplace](https://market.iex.ec/).
140 | 
141 | So we now have a DApp ready to be called by any Ethereum smart contract. The contract calls the DApp with the URL of the image to process.
142 | 
143 | When the processing is finished, a callback function is invoked so the contract can continue its process.
144 | 
145 | The computation of the image is done off-chain and acts as an [Oracle](https://medium.com/bethereum/how-oracles-connect-smart-contracts-to-the-real-world-a56d3ed6a507).
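To summarize the back-end in code, here is a minimal, hypothetical sketch of the inference-and-render step a worker executes, in the spirit of Matterport's demo notebook. The placeholder class names, file paths and config values are our illustrative assumptions, not the exact script.

```python
import matplotlib
matplotlib.use("Agg")  # headless mode: render to a file, no display required
import matplotlib.pyplot as plt
import skimage.io

import coco               # Matterport's coco.py (provides CocoConfig)
import model as modellib  # Matterport's model.py
import visualize          # Matterport's visualize.py

class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 1        # one image at a time on a single worker
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir="logs")
model.load_weights("mask_rcnn_coco.h5", by_name=True)  # COCO-trained weights

# Placeholder for the 81 COCO class names listed in the demo notebook
class_names = ["BG"] + ["class_%d" % i for i in range(1, 81)]

image = skimage.io.imread("input.jpg")  # fetched from the URL the contract passed
r = model.detect([image], verbose=0)[0]

# Draw boxes and masks, then save the figure as the task's result artifact
fig, ax = plt.subplots(1, figsize=(16, 16))
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'], ax=ax)
fig.savefig("result.png")
```

On iExec, a script along these lines is what the Docker image runs for each requested task.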
147 | #### The Front-end
148 | 
149 | On the left, you upload your image. On the right, you see the workers and the associated cost. The five categories represent different levels of computing power; they are shown in green when available.
150 | 
151 | ![front_screenshot](https://github.com/trancept/decentralized_AI/blob/master/img/front_preview.png)
152 | 
153 | We use NodeJS, Vue.JS, [Vuetify](https://vuetifyjs.com/en/), ETHjs, the [iExec front SDK](https://github.com/iExecBlockchainComputing/iexec-server-js-client), and IPFS-api.
154 | 
155 | We use IPFS to let the user upload an image. It is not mandatory, though: a user could also paste an image URL from the Internet.
156 | 
157 | Then the user pays for the processing in RLC and for the gas in ETH, using MetaMask. That's a current drawback of our solution, as the user needs to own ETH, buy RLC, and transfer RLC from their wallet to the iExec "account": many actions before being able to use the DApp.
158 | 
159 | The user can also check all of their transactions and get the result:
160 | ![Front status](https://github.com/trancept/decentralized_AI/blob/master/img/Screenshot_from_2018-06-09_14-55-45.png)
161 | 
162 | 
163 | #### Note on Proof-of-Contribution
164 | Proof of Contribution (PoCo) is the way iExec ensures that a worker does not cheat when we pay it for a job. A worker must make a deposit, and if it cheats, it loses the deposit. This is a core functionality of iExec. It means that the verification of tasks in the distributed computation must be probabilistic, in order to guard against cheating.
165 | 
166 | 
167 | ## ROADMAP
168 | ### Overview
169 | 
170 | _Phase 1_: Creation of MVP (June 2018)
171 | _Phase 2_: Debugging on testnets (Q2 2018 - Q4 2018)
172 | _Phase 3_: Release and further real-world testing (starting Q4 2018 - Q1 2019)
173 | 
174 | ### Financial Considerations & Budget
175 | 
176 | This project is currently without cost and has no assigned budget. Development is fueled by volunteers who contribute part-time, and by the kind donors of computing power on the testnet. As such, we do not plan to launch an ICO. Aside from the risks of an ICO (such as needing to spend disproportionate amounts of capital on marketing instead of engineering), we have enough faith in humanity to expect contributions towards our collective development goals.
177 | 
178 | (Note: We have imposed a very minor fee on transactions made using our DApp, to finance team-building excursions once a week.)
179 | 
180 | ## TEAM
181 | 
182 | We are a great team, perfectly able to achieve our goal.
183 | 
184 | ### Benoît Courty
185 | ![Benoît](../img/ben-rd168.png)
186 | 
187 | French technical project manager who has worked as a freelancer for big companies in energy, insurance, e-commerce and TV since 1999. He also co-founded a UAV start-up, Neo-Robotix, which unfortunately didn't find its market fit.
188 | 
189 | He has been deeply involved in blockchain technology for the past year, beginning with trading, then mining, and now the development of smart contracts and decentralized computing with projects like iExec, as you can see on his GitHub: https://github.com/trancept/. He's looking for new opportunities in blockchain and machine learning.
190 | 
191 | ### Matthew McAteer
192 | ![Matthew](../img/Mattew-rd168.png)
193 | 
194 | Matthew is a Developer at [Inkrypt](https://www.inkrypt.io/), a company using IPFS to create censorship-resistant journalism tools, and a Data Scientist at HelloFriend, which works on consumer DApps for event organizing and social media. Matthew has also worked with companies such as Google and Suspect Technologies, and is a graduate of Brown University.
195 | 
196 | ### Alexandre Moreau
197 | ![Alexandre](../img/Alex-rd.png)
198 | 
199 | Alexandre graduated in 2016 as a Computer Science engineer specialized in Robotics from the Institut polytechnique de Bordeaux in France. He currently works as an Integration Engineer at Deepomatic, a start-up specialized in Deep Learning and video recognition systems. Before that, he worked as an R&D Software Engineer at another startup, specialized in 3D printing, and completed three research internships in Bioinformatics, Computer Graphics and High Performance Computing.
200 | 
201 | ### Jeddi Mees
202 | ![Jeddi](../img/Jeddi-rd168.png)
203 | 
204 | Growth hacker from the Growth Tribe Academy in Amsterdam, with technical development training from LeWagon. He has worked in the EdTech and FoodTech industries as a growth marketer. Since 2016, he has been helping startups and corporates find new ways to grow.
205 | He's passionate about blockchain technology and AI, and is looking for new opportunities in this space as a growth/product marketer. User acquisition, conversion optimization and retention are his specializations: https://www.linkedin.com/in/jeddi-mees-i-do-growth/
206 | 
207 | 
208 | 
209 | ## REFERENCES
210 | 
211 | Below is a list of the papers and projects that have inspired this work:
212 | 
213 | - [Vuetify](https://vuetifyjs.com/en/)
214 | - [Mask R-CNN](https://arxiv.org/abs/1703.06870)
215 | - [iExec (whitepaper)](https://iex.ec/whitepaper/iExec-WPv3.0-English.pdf)
216 | - [OpenMined (site)](https://www.openmined.org/)
--------------------------------------------------------------------------------