├── LICENSE
├── README.txt
└── _config.yml

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright (c) 2017 by Iowa State University.
All rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for non-profit use, without fee, and without written agreement is
hereby granted, provided that the above copyright notice and the following
two paragraphs appear in all copies of this software.

IN NO EVENT SHALL IOWA STATE UNIVERSITY BE LIABLE TO ANY PARTY FOR
DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT
OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF IOWA STATE UNIVERSITY
HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

IOWA STATE UNIVERSITY SPECIFICALLY DISCLAIMS ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
ON AN "AS IS" BASIS, AND IOWA STATE UNIVERSITY HAS NO OBLIGATION TO
PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

--------------------------------------------------------------------------------
/README.txt:
--------------------------------------------------------------------------------

##################################################################################################

DATA ACCESS FORM:

To gain access to the data and model weights, please fill out the following form, which asks for your name, a valid email address, and the name of the institution you are affiliated with:

https://docs.google.com/forms/d/1FggY3gRfDUxH8D4hCH-OHwEIHqa2tSdFXwobvM27Qh4/edit

Thank you for your interest. The download link will be emailed to you once the form has been submitted.

Please note: The data available through this link are RGB images, each of shape [64 x 64 x 3]. If a higher-resolution dataset is required, please address your request to:

soumiks@iastate.edu (while cc'ing: sghosal@media.mit.edu)

Users are welcome to open "Issues" in this repository if they face any problems executing or deploying the trained model on their own datasets or on the shared data: https://github.com/SCSLabISU/xPLNet/issues

##################################################################################################

LABELLING INFORMATION for shared data:

Please refer to Figure 2 for class/label information for the soybean leaf image dataset. Images within a particular folder belong to the class associated with the folder name (number) shown in Figure 2; for example, images in folder '0' belong to the "Bacterial Blight" class, and so on.
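
For convenience, the sketch below shows one way to load the shared data using the folder names as labels. It is illustrative only, not part of the data release: the unpacked folder name ('soybean_data'), the use of PIL/numpy, and the assumption that the archive contains purely numeric class folders are ours.

# Minimal loading sketch (assumptions flagged above).
import os
import numpy as np
from PIL import Image

def load_dataset(root):
    images, labels = [], []
    for folder in sorted(os.listdir(root), key=int):        # '0', '1', ... (assumed all-numeric)
        for fname in sorted(os.listdir(os.path.join(root, folder))):
            img = Image.open(os.path.join(root, folder, fname))
            images.append(np.asarray(img))                  # (64, 64, 3) RGB per the note above
            labels.append(int(folder))                      # folder number = class id (Figure 2)
    return np.stack(images), np.array(labels)

x, y = load_dataset('soybean_data')       # hypothetical unpacked folder name
x = np.transpose(x, (0, 3, 1, 2))         # to channels-first (N, 3, 64, 64), as the model expects

##################################################################################################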

CITATION:

If you use this dataset and/or the methods proposed in our research, please cite our PNAS paper, available at:
http://www.pnas.org/content/115/18/4613

Bibtex:

@article{Ghosal4613,
  author = {Ghosal, Sambuddha and Blystone, David and Singh, Asheesh K. and Ganapathysubramanian, Baskar and Singh, Arti and Sarkar, Soumik},
  title = {An explainable deep machine vision framework for plant stress phenotyping},
  volume = {115},
  number = {18},
  pages = {4613--4618},
  year = {2018},
  doi = {10.1073/pnas.1716999115},
  publisher = {National Academy of Sciences},
  issn = {0027-8424},
  URL = {http://www.pnas.org/content/115/18/4613},
  eprint = {http://www.pnas.org/content/115/18/4613.full.pdf},
  journal = {Proceedings of the National Academy of Sciences}
}

##################################################################################################

THE FOLLOWING DESCRIBES THE MODEL ARCHITECTURE AND HYPER-PARAMETERS (use the seed # provided to reproduce the results from the paper):

seed = 1337 (use numpy.random.seed(seed))

Arch 1:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 128, 62, 62)       3584
_________________________________________________________________
batch_normalization_1 (Batch (None, 128, 62, 62)       248
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 128, 60, 60)       147584
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 128, 30, 30)       0
_________________________________________________________________
batch_normalization_2 (Batch (None, 128, 30, 30)       120
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 128, 28, 28)       147584
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 128, 14, 14)       0
_________________________________________________________________
batch_normalization_3 (Batch (None, 128, 14, 14)       56
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 128, 12, 12)       147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 128, 6, 6)         0
_________________________________________________________________
batch_normalization_4 (Batch (None, 128, 6, 6)         24
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 128, 4, 4)         147584
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 128, 2, 2)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 512)               0
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_1 (Dense)              (None, 500)               256500
_________________________________________________________________
dropout_2 (Dropout)          (None, 500)               0
_________________________________________________________________
dense_2 (Dense)              (None, 100)               50100
_________________________________________________________________
dropout_3 (Dropout)          (None, 100)               0
_________________________________________________________________
dense_3 (Dense)              (None, 9)                 909
=================================================================
Total params: 901,877
Trainable params: 901,653
Non-trainable params: 224
_________________________________________________________________
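
To make the specification concrete, the sketch below reconstructs Arch 1 in Keras 2-style code from the summary table and the hyper-parameter notes that follow it. It is our reading, not the released training code: the 3 x 3 kernels and 2 x 2 pools are inferred from the output shapes, while the ReLU activations, categorical cross-entropy loss, and dropout rates are assumptions to verify against the released weights. x and y are taken from the loading sketch above.

# Sketch reconstructing "Arch 1" (assumptions flagged above).
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                          Flatten, Dropout, Dense)
from keras.optimizers import Adam
from keras.utils import to_categorical

np.random.seed(1337)                       # seed given above
K.set_image_data_format('channels_first')  # Theano-style (3, 64, 64) inputs

model = Sequential([
    Conv2D(128, (3, 3), activation='relu', input_shape=(3, 64, 64)),
    BatchNormalization(),   # default axis=-1 reproduces the 248/120/56/24 param counts
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    BatchNormalization(),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    BatchNormalization(),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    BatchNormalization(),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),              # 128 * 2 * 2 = 512
    Dropout(0.5),           # rate assumed; not stated in this README
    Dense(500, activation='relu'),
    Dropout(0.5),           # rate assumed
    Dense(100, activation='relu'),
    Dropout(0.5),           # rate assumed
    Dense(9, activation='softmax'),         # 9 stress classes
])
model.summary()             # should match the table above (901,877 params)

# Optimizer and input scaling per the notes below; the loss is an
# assumption for a 9-way softmax classifier.
model.compile(optimizer=Adam(lr=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])
x = x.astype('float32')
x /= x.max(axis=(1, 2, 3), keepdims=True)   # one reading of the max-normalization note below
model.fit(x, to_categorical(y, 9), batch_size=60, epochs=150, validation_split=0.1)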

OPTIMIZER USED: Adam (lr = 0.001; the default learning rate from "Adam: A Method for Stochastic Optimization" by Kingma & Ba)
BATCH_SIZE for Training: 60
Epochs = 150 (~90 s per epoch)

INPUT SHAPE: (3, 64, 64) [channels first]
INPUT PRE-PROCESSING: Inputs must be normalized by the maximum value of the input image channels before being fed to the trained network for online/offline classification.
FRONTEND: Keras
BACKEND: Theano

GPU Used: NVIDIA GeForce GTX TITAN X (12207 MB of dedicated GPU memory)
CUDA 8.0 (cuDNN 5.1)

Training Samples = 59184 (validation split = 0.1)
Test Samples = 6576

Test Accuracy = 94.13%

--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
theme: jekyll-theme-time-machine
--------------------------------------------------------------------------------