├── Main.ipynb ├── README.md ├── data ├── gt_mband │ ├── .gitattributes │ ├── 01.tif │ ├── 02.tif │ ├── 03.tif │ ├── 04.tif │ ├── 05.tif │ ├── 06.tif │ ├── 07.tif │ ├── 08.tif │ ├── 09.tif │ ├── 10.tif │ ├── 11.tif │ ├── 12.tif │ ├── 13.tif │ ├── 14.tif │ ├── 15.tif │ ├── 16.tif │ ├── 17.tif │ ├── 18.tif │ ├── 19.tif │ ├── 20.tif │ ├── 21.tif │ ├── 22.tif │ ├── 23.tif │ └── 24.tif └── mband │ ├── .gitattributes │ ├── 01.tif │ ├── 02.tif │ ├── 03.tif │ ├── 04.tif │ ├── 05.tif │ ├── 06.tif │ ├── 07.tif │ ├── 08.tif │ ├── 09.tif │ ├── 10.tif │ ├── 11.tif │ ├── 12.tif │ ├── 13.tif │ ├── 14.tif │ ├── 15.tif │ ├── 16.tif │ ├── 17.tif │ ├── 18.tif │ ├── 19.tif │ ├── 20.tif │ ├── 21.tif │ ├── 22.tif │ ├── 23.tif │ ├── 24.tif │ └── test.tif ├── imgs ├── 1.png ├── 2.png ├── 3.png ├── 4.png ├── 5.png ├── 6.png ├── 7.PNG └── 9.png ├── losses.py ├── unet_model.py └── utils.py /README.md: -------------------------------------------------------------------------------- 1 | # Counting Trees using Satellite Images 2 | 3 | ## 1. Introduction: 4 | 5 | Counting trees manually in satellite images is a very tedious task. Fortunately, there are several techniques for automating tree counting. Morphological operations and classical segmentation algorithms such as the watershed algorithm have been applied to tree counting, with limited success so far. In dense areas, however, the trees are packed more closely together and their crowns often overlap. Such areas may also show different forest characteristics, such as differences in crown structure, species diversity and openness of the tree crowns, which makes the problem even more challenging. The tree counting algorithm therefore has to be more robust and intelligent, and this is where deep learning comes into play.
6 | 7 | **This study investigates localizing and counting trees in order to create an inventory of incoming and outgoing trees that cannot be documented and recorded in the tree register during the annual tree inspections due to extensive felling or other reasons.** 8 | 9 | 10 | 11 | ## 2. Dataset and Processing: 12 | 13 | Satellite images are usually very large and have more than three channels. Our dataset consists of satellite images (848 × 837 pixels, eight channels) and labeled masks (848 × 837 pixels, five channels) that were hand-labeled by analysts with image labeling tools to represent: 14 | 15 | 16 | 17 | 1. Buildings 18 | 19 | 2. Roads and Tracks 20 | 21 | 3. Trees 22 | 23 | 4. Crops 24 | 25 | 5. Water 26 | 27 | 28 | 29 | Below you see one of the satellite images and the corresponding labels: 30 | 31 | 32 |

33 | the satellite images and the corresponding labels 34 |

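The multi-band tiles above (eight input channels, five label channels) are too large to feed to a network directly, so fixed-size patches are cut from them, as described next. A minimal NumPy sketch of such random-window sampling follows; it is illustrative only, not the repo's `utils.py`, and the function name is hypothetical:

```python
import numpy as np

def random_patches(image, mask, patch_size=160, n_patches=4000, seed=0):
    # Illustrative sketch (not the repo's utils.py): sample n_patches random
    # windows of patch_size x patch_size from an (H, W, C) image/mask pair.
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    rows = rng.integers(0, h - patch_size + 1, size=n_patches)
    cols = rng.integers(0, w - patch_size + 1, size=n_patches)
    imgs = np.stack([image[r:r + patch_size, c:c + patch_size]
                     for r, c in zip(rows, cols)])
    lbls = np.stack([mask[r:r + patch_size, c:c + patch_size]
                     for r, c in zip(rows, cols)])
    return imgs, lbls

# Toy-sized demo with the tile dimensions from the text:
image = np.zeros((848, 837, 8), dtype=np.float32)   # 8 spectral bands
mask = np.zeros((848, 837, 5), dtype=np.uint8)      # 5 label channels
x, y = random_patches(image, mask, patch_size=160, n_patches=8)
print(x.shape, y.shape)  # (8, 160, 160, 8) (8, 160, 160, 5)
```

Keeping only the tree channel (index 2, assuming the mask channels follow the class list above) then reduces the labels to shape (n, 160, 160, 1), matching the shapes quoted later in this section.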
35 | 36 | 37 | **To create the training and validation datasets, the following steps were implemented:** 38 | 39 | 1. When reading the satellite images and their corresponding labels, 20 percent of each image and label was assigned to the validation dataset. 40 | 2. Once the training and validation datasets are created, a random window of a predefined size moves over the images and labels of each dataset to create the predefined number of patches. For example, with a window size of 160 and 4000 patches for the training dataset, we get a shape of (4000, 160, 160, 8) for the training images and a shape of (4000, 160, 160, 5) for the training labels. 41 | 3. Since this study focuses on counting trees, the four other label channels, namely buildings, roads and tracks, crops and water, are removed, i.e. the shape of the training labels goes from (4000, 160, 160, 5) to (4000, 160, 160, 1). 42 | 43 | ## 3. Models: 44 | 45 | There are various deep learning segmentation methods, such as [Semantic Segmentation](https://medium.com/analytics-vidhya/deep-learning-semantic-segmentation-networks-18148e2cf0fb) and [Instance Segmentation](https://medium.com/analytics-vidhya/deep-learning-instance-segmentation-networks-2aa71c920b5b), each of which has leading models. In this phase of the study we decided on the U-Net, which has attracted much attention in recent years and uses fully convolutional networks to perform **semantic segmentation**. 46 | 47 | The first U-Net was built by Olaf Ronneberger et al. at the University of Freiburg for the segmentation of biomedical images. It has since been used in many other architectures, such as the Pix2Pix network, to solve challenging problems. As seen below, the architecture of the U-Net looks like a ‘U’, which gives it its name. 48 | 49 | 50 |

51 | Unet 52 |

53 | 54 | 55 | This architecture consists of two sections: 56 | 57 | 1. The contraction section, which captures the context in the image, increasing the “what” (semantic information) while decreasing the “where” (spatial information). 58 | 59 | 2. The expansion section, which enables precise localization. 60 | 61 | **After [implementing the U-net model](https://github.com/A2Amir/Counting-Trees-through-Satellite-Images/blob/main/unet_model.py) with inputs of shape (number of patches, window size, window size, number of channels or bands) and outputs of shape (number of patches, window size, window size, number of labels), as explained above, we need to choose a loss function and evaluation metrics to evaluate the model during training.** 62 | 63 | You can see the structure of the U-net model below: 64 | 65 | 66 |

67 | Unet 68 |

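The contraction path shown above is also what makes the skip connections line up: `unet_model.py` applies five 2 × 2 max-pooling stages, so a 160 × 160 window is halved five times before the expansion path mirrors the sizes back up. A quick sanity check of those spatial sizes:

```python
def encoder_sizes(im_sz=160, n_pools=5):
    # Spatial size after each 2x2 max-pooling stage of the contraction path
    # (five pools, as in unet_model.py: pool1, pool2, pool3, pool4_1, pool4_2).
    sizes = [im_sz]
    for _ in range(n_pools):
        sizes.append(sizes[-1] // 2)
    return sizes

print(encoder_sizes())  # [160, 80, 40, 20, 10, 5]
```

This also shows why the window size should be divisible by 2⁵ = 32, so that the upsampled feature maps match the encoder feature maps they are concatenated with.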
69 | 70 | 71 | ## 4. Loss function and Evaluation Metrics: 72 | 73 | There are many loss functions to choose from for semantic segmentation problems (some of them are implemented in the [losses.py](https://github.com/A2Amir/Counting-Trees-through-Satellite-Images/blob/main/losses.py) file), but one of the most useful is **binary cross entropy**. Cross entropy is well suited to classification problems and yields clean results within each class, so we chose binary cross entropy as the loss function. 74 | 75 | Metrics such as accuracy, precision and recall were used to evaluate the U-net model on the validation dataset in order to tune the hyperparameters. See the [losses file](https://github.com/A2Amir/Counting-Trees-through-Satellite-Images/blob/main/losses.py) for the detailed implementations. 76 | 77 | ## 5. Test the Model: 78 | 79 | After training and fine-tuning the U-Net model, we obtained a **validation loss of 0.1388, validation accuracy of 0.9447, validation precision of 0.9757 and validation recall of 0.7551.** 80 | 81 | 82 | We then made predictions on unseen data in order to count the number of trees in the satellite images. The model predicts where the trees are located (localization); to count them, we used the [measure.label](https://scikit-image.org/docs/stable/api/skimage.measure.html#skimage.measure.label) function from the scikit-image library, which labels connected regions of an integer array. 83 | 84 | Below are some of the predictions made by the model: 85 | 86 | 87 |

88 | Unet 89 | Unet 90 | Unet 91 | Unet 92 | Unet 93 | 94 | 95 |

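The counting step described above — labeling connected regions of the binarized prediction and counting the labels — can be reproduced without scikit-image. Below is a small pure-NumPy stand-in using 8-connectivity (the `measure.label` default for 2-D inputs); the repo itself uses scikit-image:

```python
import numpy as np
from collections import deque

def count_regions(mask):
    # Count 8-connected foreground regions in a binary mask via BFS.
    # Minimal stand-in for skimage.measure.label + taking the label count.
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new region found
                seen[i, j] = True
                q = deque([(i, j)])
                while q:                        # flood-fill the region
                    x, y = q.popleft()
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nx, ny = x + dx, y + dy
                            if (0 <= nx < h and 0 <= ny < w
                                    and mask[nx, ny] and not seen[nx, ny]):
                                seen[nx, ny] = True
                                q.append((nx, ny))
    return count

pred = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=np.uint8)
print(count_regions(pred))  # 2 (diagonally touching pixels merge under 8-connectivity)
```

Each labeled region is then treated as one tree crown, so the region count is the tree count for that image.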
96 | 97 | 98 | 99 | ## 6. Structure: 100 | Below you can find the file structure of the [GitHub](https://github.com/A2Amir/Counting-Trees-through-Satellite-Images) project:

102 | 
103 |       - data
104 |       | - gt_mband
105 |       | |- 01.tif  # labels of satellite images
106 |       | |- 02.tif  # labels of satellite images      
107 |       | - mband
108 |       | |- 01.tif  # satellite images with the 8 bands
109 |       | |- 02.tif  # satellite images with the 8 bands
110 | 
111 |       - imgs
112 |       |- 1.png  # Readme images
113 |       |- 2.png  # Readme images
114 |       
115 |       - logs
116 |       | -  UNet_(11-26-2020 , 19_51_28)  # the logs of the models during training
117 |          
118 |       
119 |       - models
120 |       | -  UNet_(11-26-2020 , 19_51_28)  # the weights of the trained model
121 | 
122 |       - Main.ipynb    # main code
123 |       - README.md
124 |       - losses.py     # losses code
125 |       - unet_model.py # U-net model code
126 |       - utils.py      # utils code
127 |       
128 | 
129 | 130 | ## 7. Conclusion: 131 | 132 | The performance of deep learning and machine learning models depends strongly on the quality and quantity of the training data: in general, the more data we feed into the model, the better the performance we achieve. 133 | -------------------------------------------------------------------------------- /data/gt_mband/.gitattributes: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:51a3c67bf504a08204c039edb9fc05412f8f336b3570fa4f0f410bde183deabf 3 | size 41 4 | -------------------------------------------------------------------------------- /data/gt_mband/01.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:ba2674b9011d522d1ac2c02f2ddf2df28e1d53a58376f81ce8cbdb8fe2706cb6 3 | size 3562260 4 | -------------------------------------------------------------------------------- /data/gt_mband/02.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:88c7cb879941c03f4e740abc9e280eba6e5e33a634b2f737daf0503e84fad9a6 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/03.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:fd5154c7a0483b0c8bcc707cc7c6a7eb63ce9117412b24db8d0420669c3c16bc 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/04.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:3e8fd86ab05e20e82ef1d212e4c737edea8a0f920fecda0f841d128e1a6c71a3 3 | size 3549704 4 | 
-------------------------------------------------------------------------------- /data/gt_mband/05.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:3747652ae2c2eecad660682169649e3ce5cb3140dcb2a636bc5f549576b0cad8 3 | size 3562260 4 | -------------------------------------------------------------------------------- /data/gt_mband/06.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:8b14f35629746c1474ed65d668d854935e65b5f16e2b0a167ce4ef1323def956 3 | size 3562260 4 | -------------------------------------------------------------------------------- /data/gt_mband/07.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:9b044b3338de0ef4224b48c6f587b75b7ae7db796bdd60356c12367400626322 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/08.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:d6cb805878abadcaca2f686ab00155b6b6bcf2b512ee32ce7c9339425b76ca3d 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/09.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:20583a34bc97d4eb8c674028f9e24fd6c9bde854361777d3dabd62a3384c7bae 3 | size 3553890 4 | -------------------------------------------------------------------------------- /data/gt_mband/10.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:b48d603a2ae1dd5d934f935250ae2b3aba5ed6db9b7891600236d5d27ed250cb 3 | size 3549704 4 | 
-------------------------------------------------------------------------------- /data/gt_mband/11.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:3e651aaa64d877dac014da1b9463cbe7b05427067fb4f1f41b91200702c92e2a 3 | size 3562260 4 | -------------------------------------------------------------------------------- /data/gt_mband/12.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:24e0efa7eb5576aa8b289d72dcaab6eb19811ec6adf4527e0d82ac0767eb5e25 3 | size 3553890 4 | -------------------------------------------------------------------------------- /data/gt_mband/13.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:534b7fe929eda3292d5c29e8bf3cc0838ba77c99cef796905614ba5e4bcef6bc 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/14.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:c09470c777a89e42af6163038a6ac8279be1991ed8caf51ffe09ba21bb919a2f 3 | size 3553890 4 | -------------------------------------------------------------------------------- /data/gt_mband/15.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:37cd1d8f104ca5f589b7b7c2cbbb0b15228b5e999b7bd8e69326c55296be3f0d 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/16.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:57e73ad8b711504945056ceec0c605848b6deebcdf7f6748b231a3660d1db9fc 3 | size 3549704 4 | 
-------------------------------------------------------------------------------- /data/gt_mband/17.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:31b066c5d48be15666e9b5ace3c6ea2719787d6d41b6a35580129e237c261bc1 3 | size 3553890 4 | -------------------------------------------------------------------------------- /data/gt_mband/18.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:ef1680d25816ea48f069ab5e5ef187fef88ac8f5691841770e8539f32acb6568 3 | size 3541334 4 | -------------------------------------------------------------------------------- /data/gt_mband/19.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:9fc16231c815289e7b8d5cbbb89a4e12592f1271d8a8cf2af3fd35acf31b0395 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/20.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:e88bde8624dbc924cad85d325d0a727e8c33810bcaddb63c22542920bf66a009 3 | size 3553890 4 | -------------------------------------------------------------------------------- /data/gt_mband/21.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:4a33d73edf796c9371112bc24f7068da982c8d53322b311135d6a7d412109335 3 | size 3549704 4 | -------------------------------------------------------------------------------- /data/gt_mband/22.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:928a60ad182e8dd2ba8598e5a2b646c7bf76db0f04b288a70e0736396865f763 3 | size 3553890 4 | 
-------------------------------------------------------------------------------- /data/gt_mband/23.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:3e796f37fc36a7ba3ae56f516c6a8684c514628cd60170d59cb042e1c4c9837d 3 | size 3499474 4 | -------------------------------------------------------------------------------- /data/gt_mband/24.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:a3bc15f9ab0bd599e9381b4d6937518abdcc1d70c74c68440e486793ed4a4a9b 3 | size 3553890 4 | -------------------------------------------------------------------------------- /data/mband/.gitattributes: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:51a3c67bf504a08204c039edb9fc05412f8f336b3570fa4f0f410bde183deabf 3 | size 41 4 | -------------------------------------------------------------------------------- /data/mband/01.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:f5a04885595dc3af24a6aebfab88f6eaba04f42bc2218eaa31f1f49d8151d3b5 3 | size 11397830 4 | -------------------------------------------------------------------------------- /data/mband/02.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:0ae052297d669b2cd84869f1bdc620846df0226112ae10c5cd275f3016d434fd 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/03.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:3dfd4ca112054d85cd7b68e84f47d711036252a45ef0e676c969fef246c0c7af 3 | size 11357654 4 | 
-------------------------------------------------------------------------------- /data/mband/04.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:02ef2ce495496d705650c249eae62aeab41e538ad006697625da8ec0bb224170 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/05.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:01eebcd63398dcd60a19e04d4b9199e6e9e9fa9607027d6c4c5c39b8ee8d1c74 3 | size 11397830 4 | -------------------------------------------------------------------------------- /data/mband/06.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:119f1b4523af1d982ae251d91ab6223434699a718156ea41c788476b6c3273cf 3 | size 11397830 4 | -------------------------------------------------------------------------------- /data/mband/07.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:516bb521e1d29293cd50fe0cb2b5f29cdd1706a5308c8e32547b72373744dae7 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/08.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:9271bb8210febf9a43f226e3b2831f047367588578e5ed0eeaae3cf9e176ef1c 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/09.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:5f26574140071c357a467ee0ad30ad3970f7ce51240fa840a1778d5fbdae6fcc 3 | size 11371046 4 | 
-------------------------------------------------------------------------------- /data/mband/10.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:84d21a3247117611266fc1ee39cd60b54d7284da764f0e972f9edfeba99b569b 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/11.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:2264251ba2d23fa53e1001b1eb2ca8ca4f4b9c19df7efc14b5642b7438648e46 3 | size 11397830 4 | -------------------------------------------------------------------------------- /data/mband/12.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:49e56635a85fea6b0f2ec639e61536fc9eb1e12fb477321ce8e4af8449fc8cb9 3 | size 11371046 4 | -------------------------------------------------------------------------------- /data/mband/13.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:b1d2b9831549d02c8f9c972b01a309a7d9e79a29c0f0268e17cf67defed74883 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/14.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:090471685c4b76455568ad21036efa6bd9f453b8ede45d6ca5ace8e9496644aa 3 | size 11371046 4 | -------------------------------------------------------------------------------- /data/mband/15.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:b2bf48ffa9b4fdc754a3f486d67cf179b19ec008413e390e11d27c147bc08c36 3 | size 11357654 4 | 
-------------------------------------------------------------------------------- /data/mband/16.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:daf7aaf783cd4704a3a09aef313e7a451208e022d765c9324eddb63ec10afe1a 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/17.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:da92671f3dabf53f0f637fb516c0d12d34b7c70cab870e9a79dd90cc72ac9c75 3 | size 11371046 4 | -------------------------------------------------------------------------------- /data/mband/18.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:ad92dede4b552950955e342f67bce902c11b4acd4253f1aa28fdd736238b8532 3 | size 11330870 4 | -------------------------------------------------------------------------------- /data/mband/19.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:0ca7e00bac911bf144963a9ae208b041e97742d44d90884438ca235a9dd68463 3 | size 11357654 4 | -------------------------------------------------------------------------------- /data/mband/20.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:5dfe0c199445af3f5f600ab445225f6ff45779a4015b278d78935bf1d010ebc1 3 | size 11371046 4 | -------------------------------------------------------------------------------- /data/mband/21.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:c0307f880314a3f4b991ed427ddb604ae5fb263266155dac622c0b42ccc69779 3 | size 11357654 4 | 
-------------------------------------------------------------------------------- /data/mband/22.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:a44104d451c1a74f77fa7489cfb24bf6ef590a1bdd894ac697278ca75cdc051f 3 | size 11371046 4 | -------------------------------------------------------------------------------- /data/mband/23.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:f876f0c59aec0ed2834664c141d2804ddcdca6128cacbd66afe8e6f75d57361c 3 | size 11196918 4 | -------------------------------------------------------------------------------- /data/mband/24.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:2aa2289165cafa81ac3d14819b1570a3afc40a337ece93af4d29e5010232712b 3 | size 11371046 4 | -------------------------------------------------------------------------------- /data/mband/test.tif: -------------------------------------------------------------------------------- 1 | version https://git-lfs.github.com/spec/v1 2 | oid sha256:1da4747ea8eeba1e61b55a909723d6827daca62a6dae899c0e3ae6e54a03234d 3 | size 11357654 4 | -------------------------------------------------------------------------------- /imgs/1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/1.png -------------------------------------------------------------------------------- /imgs/2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/2.png 
-------------------------------------------------------------------------------- /imgs/3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/3.png -------------------------------------------------------------------------------- /imgs/4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/4.png -------------------------------------------------------------------------------- /imgs/5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/5.png -------------------------------------------------------------------------------- /imgs/6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/6.png -------------------------------------------------------------------------------- /imgs/7.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/7.PNG -------------------------------------------------------------------------------- /imgs/9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/A2Amir/Counting-Trees-using-Satellite-Images/1f38acbf4eacfda72b8b20c54d50ab9ce3adfa45/imgs/9.png -------------------------------------------------------------------------------- /losses.py: 
-------------------------------------------------------------------------------- 1 | import tensorflow.keras.backend as K 2 | import numpy as np 3 | import tensorflow as tf 4 | 5 | def tversky(y_true, y_pred, alpha=0.6, beta=0.4): 6 | """ 7 | Function to calculate the Tversky loss for imbalanced data 8 | :param prediction: the logits 9 | :param ground_truth: the segmentation ground_truth 10 | :param alpha: weight of false positives 11 | :param beta: weight of false negatives 12 | :param weight_map: 13 | :return: the loss 14 | """ 15 | 16 | y_t = y_true[...,0] 17 | y_t = y_t[...,np.newaxis] 18 | # weights 19 | y_weights = y_true[...,1] 20 | y_weights = y_weights[...,np.newaxis] 21 | 22 | ones = 1 23 | p0 = y_pred # proba that voxels are class i 24 | p1 = ones - y_pred # proba that voxels are not class i 25 | g0 = y_t 26 | g1 = ones - y_t 27 | 28 | tp = tf.reduce_sum(y_weights * p0 * g0) 29 | fp = alpha * tf.reduce_sum(y_weights * p0 * g1) 30 | fn = beta * tf.reduce_sum(y_weights * p1 * g0) 31 | 32 | EPSILON = 0.00001 33 | numerator = tp 34 | denominator = tp + fp + fn + EPSILON 35 | score = numerator / denominator 36 | return 1.0 - tf.reduce_mean(score) 37 | 38 | def accuracy(y_true, y_pred): 39 | """compute accuracy""" 40 | y_t = y_true[...,0] 41 | y_t = y_t[...,np.newaxis] 42 | return K.equal(K.round(y_t), K.round(y_pred)) 43 | 44 | def dice_coef(y_true, y_pred, smooth=0.0000001): 45 | """compute dice coef""" 46 | y_t = y_true[...,0] 47 | y_t = y_t[...,np.newaxis] 48 | intersection = K.sum(K.abs(y_t * y_pred), axis=-1) 49 | union = K.sum(y_t, axis=-1) + K.sum(y_pred, axis=-1) 50 | return K.mean((2. 
* intersection + smooth) / (union + smooth), axis=-1) 51 | 52 | def dice_loss(y_true, y_pred): 53 | """compute dice loss""" 54 | y_t = y_true[...,0] 55 | y_t = y_t[...,np.newaxis] 56 | return 1 - dice_coef(y_t, y_pred) 57 | 58 | def true_positives(y_true, y_pred): 59 | """compute true positive""" 60 | y_t = y_true[...,0] 61 | y_t = y_t[...,np.newaxis] 62 | return K.round(y_t * y_pred) 63 | 64 | def false_positives(y_true, y_pred): 65 | """compute false positive""" 66 | y_t = y_true[...,0] 67 | y_t = y_t[...,np.newaxis] 68 | return K.round((1 - y_t) * y_pred) 69 | 70 | def true_negatives(y_true, y_pred): 71 | """compute true negative""" 72 | y_t = y_true[...,0] 73 | y_t = y_t[...,np.newaxis] 74 | return K.round((1 - y_t) * (1 - y_pred)) 75 | 76 | def false_negatives(y_true, y_pred): 77 | """compute false negative""" 78 | y_t = y_true[...,0] 79 | y_t = y_t[...,np.newaxis] 80 | return K.round((y_t) * (1 - y_pred)) 81 | 82 | 83 | def recall(y_true, y_pred): 84 | """compute sensitivity (recall)""" 85 | y_t = y_true[...,0] 86 | y_t = y_t[...,np.newaxis] 87 | tp = true_positives(y_t, y_pred) 88 | fn = false_negatives(y_t, y_pred) 89 | return K.sum(tp) / (K.sum(tp) + K.sum(fn)) 90 | 91 | def precision(y_true, y_pred): 92 | """compute precision (positive predictive value)""" 93 | y_t = y_true[...,0] 94 | y_t = y_t[...,np.newaxis] 95 | # precision = TP / (TP + FP); the earlier version computed TN / (TN + FP), 96 | # which is specificity, not precision 97 | tp = true_positives(y_t, y_pred) 98 | fp = false_positives(y_t, y_pred) 99 | return K.sum(tp) / (K.sum(tp) + K.sum(fp)) 100 | 101 | def weighted_binary_crossentropy(y_true, y_pred): 102 | class_loglosses = K.mean(K.binary_crossentropy(y_true, y_pred), axis=[0, 1, 2]) 103 | return K.sum(class_loglosses) 104 | # In[ ]: 105 | 106 | 107 | 108 | 109 | -------------------------------------------------------------------------------- /unet_model.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | from tensorflow.keras.models import Model 4 | from tensorflow.keras.layers import Input, Conv2D, 
MaxPooling2D, UpSampling2D, concatenate, Conv2DTranspose, BatchNormalization, Dropout 5 | from tensorflow.keras.optimizers import Adam 6 | from tensorflow.keras.utils import plot_model 7 | from tensorflow.keras import backend as K 8 | from losses import * 9 | from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard 10 | from tensorflow.keras.models import load_model 11 | 12 | def unet_model(n_classes=5, im_sz=160, n_channels=8, n_filters_start=32, growth_factor=2, upconv=True): 13 | 14 | droprate=0.25 15 | n_filters = n_filters_start 16 | inputs = Input((im_sz, im_sz, n_channels)) 17 | #inputs = BatchNormalization()(inputs) 18 | conv1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(inputs) 19 | conv1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv1) 20 | pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) 21 | #pool1 = Dropout(droprate)(pool1) 22 | 23 | n_filters *= growth_factor 24 | pool1 = BatchNormalization()(pool1) 25 | conv2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool1) 26 | conv2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv2) 27 | pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) 28 | pool2 = Dropout(droprate)(pool2) 29 | 30 | n_filters *= growth_factor 31 | pool2 = BatchNormalization()(pool2) 32 | conv3 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool2) 33 | conv3 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv3) 34 | pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) 35 | pool3 = Dropout(droprate)(pool3) 36 | 37 | n_filters *= growth_factor 38 | pool3 = BatchNormalization()(pool3) 39 | conv4_0 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool3) 40 | conv4_0 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv4_0) 41 | pool4_1 = MaxPooling2D(pool_size=(2, 2))(conv4_0) 42 | pool4_1 = Dropout(droprate)(pool4_1) 43 | 44 | n_filters *= growth_factor 45 | pool4_1 = BatchNormalization()(pool4_1) 46 | conv4_1 = 
Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool4_1) 47 | conv4_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv4_1) 48 | pool4_2 = MaxPooling2D(pool_size=(2, 2))(conv4_1) 49 | pool4_2 = Dropout(droprate)(pool4_2) 50 | 51 | n_filters *= growth_factor 52 | conv5 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool4_2) 53 | conv5 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv5) 54 | 55 | n_filters //= growth_factor 56 | if upconv: 57 | up6_1 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv5), conv4_1]) 58 | else: 59 | up6_1 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4_1]) 60 | up6_1 = BatchNormalization()(up6_1) 61 | conv6_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up6_1) 62 | conv6_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv6_1) 63 | conv6_1 = Dropout(droprate)(conv6_1) 64 | 65 | n_filters //= growth_factor 66 | if upconv: 67 | up6_2 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv6_1), conv4_0]) 68 | else: 69 | up6_2 = concatenate([UpSampling2D(size=(2, 2))(conv6_1), conv4_0]) 70 | up6_2 = BatchNormalization()(up6_2) 71 | conv6_2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up6_2) 72 | conv6_2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv6_2) 73 | conv6_2 = Dropout(droprate)(conv6_2) 74 | 75 | n_filters //= growth_factor 76 | if upconv: 77 | up7 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv6_2), conv3]) 78 | else: 79 | up7 = concatenate([UpSampling2D(size=(2, 2))(conv6_2), conv3]) 80 | up7 = BatchNormalization()(up7) 81 | conv7 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up7) 82 | conv7 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv7) 83 | conv7 = Dropout(droprate)(conv7) 84 | 85 | n_filters //= growth_factor 86 | if upconv: 87 | up8 = 
concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv7), conv2]) 88 | else: 89 | up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2]) 90 | up8 = BatchNormalization()(up8) 91 | conv8 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up8) 92 | conv8 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv8) 93 | conv8 = Dropout(droprate)(conv8) 94 | 95 | n_filters //= growth_factor 96 | if upconv: 97 | up9 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv8), conv1]) 98 | else: 99 | up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1]) 100 | conv9 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up9) 101 | conv9 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv9) 102 | 103 | conv10 = Conv2D(n_classes, (1, 1), activation='sigmoid')(conv9) 104 | 105 | model = Model(inputs=inputs, outputs=conv10) 106 | 107 | 108 | model.compile(optimizer=Adam(), loss=weighted_binary_crossentropy, metrics=[accuracy, precision, recall]) 109 | 110 | return model 111 | 112 | 113 | def get_callbacks(): 114 | 115 | timestr = datetime.datetime.now().strftime("%m-%d-%Y_%H-%M-%S") # avoid spaces and colons in file paths 116 | model_dir = os.path.join('./models','UNet_{}'.format(timestr)) 117 | checkpoint = ModelCheckpoint(model_dir, monitor='val_loss', verbose=1, 118 | save_best_only=True, mode='min', save_weights_only = False) 119 | 120 | log_dir = os.path.join('./logs','UNet_{}'.format(timestr)) 121 | tensorboard = TensorBoard(log_dir=log_dir, histogram_freq=0, write_graph=True, write_images=False, embeddings_freq=0, update_freq='epoch') # TF1-era args (write_grads, embeddings_layer_names, embeddings_data) were removed from tf.keras 122 | 123 | callbacks_list = [checkpoint, tensorboard] 124 | 125 | return callbacks_list 126 | 127 | def model_load(model_path): 128 | from losses import accuracy,precision,recall,weighted_binary_crossentropy 129 | from tensorflow.keras.optimizers import Adam 130 | 
131 | from tensorflow.keras.models import load_model 132 | 133 | 134 | model = load_model(model_path, custom_objects={ 'accuracy':accuracy , 'precision': precision, 'recall':recall}, compile=False) 135 | 136 | 137 | model.compile(optimizer=Adam(), loss=weighted_binary_crossentropy, metrics=[accuracy, precision, recall]) 138 | print("Model is loaded..") 139 | return model 140 | -------------------------------------------------------------------------------- /utils.py: -------------------------------------------------------------------------------- 1 | import random 2 | import cv2 3 | import numpy as np 4 | import tifffile as tiff 5 | import earthpy.plot as ep 6 | import matplotlib.pyplot as plt 7 | from skimage import measure 8 | from skimage import filters 9 | 10 | def normalize(img): 11 | min = img.min() 12 | max = img.max() 13 | x = 2.0 * (img - min) / (max - min) - 1.0 14 | return x 15 | 16 | def get_rand_patch(img, mask, sz=160, channel = None): 17 | """ 18 | :param img: ndarray with shape (x_sz, y_sz, num_channels) 19 | :param mask: binary ndarray with shape (x_sz, y_sz, num_classes) 20 | :param sz: size of random patch 21 | :param Channels 0: Buildings , 1: Roads & Tracks, 2: Trees , 3: Crops, 4: Water 22 | :return: patch with shape (sz, sz, num_channels) 23 | 24 | 25 | """ 26 | assert len(img.shape) == 3 and img.shape[0] > sz and img.shape[1] > sz and img.shape[0:2] == mask.shape[0:2] 27 | xc = random.randint(0, img.shape[0] - sz) 28 | yc = random.randint(0, img.shape[1] - sz) 29 | patch_img = img[xc:(xc + sz), yc:(yc + sz)] 30 | patch_mask = mask[xc:(xc + sz), yc:(yc + sz)] 31 | 32 | # Apply some random transformations 33 | random_transformation = np.random.randint(1,8) 34 | if random_transformation == 1: # reverse first dimension 35 | patch_img = patch_img[::-1,:,:] 36 | patch_mask = patch_mask[::-1,:,:] 37 | elif random_transformation == 2: # reverse second dimension 38 | patch_img = patch_img[:,::-1,:] 39 | patch_mask = patch_mask[:,::-1,:] 40 | elif 
random_transformation == 3: # transpose (interchange) first and second dimensions 41 | patch_img = patch_img.transpose([1,0,2]) 42 | patch_mask = patch_mask.transpose([1,0,2]) 43 | elif random_transformation == 4: 44 | patch_img = np.rot90(patch_img, 1) 45 | patch_mask = np.rot90(patch_mask, 1) 46 | elif random_transformation == 5: 47 | patch_img = np.rot90(patch_img, 2) 48 | patch_mask = np.rot90(patch_mask, 2) 49 | elif random_transformation == 6: 50 | patch_img = np.rot90(patch_img, 3) 51 | patch_mask = np.rot90(patch_mask, 3) 52 | else: 53 | pass 54 | if channel=='all': 55 | return patch_img, patch_mask 56 | 57 | if channel !='all': 58 | patch_mask = patch_mask[:,:,channel] 59 | return patch_img, patch_mask 60 | 61 | 62 | 63 | def get_patches(x_dict, y_dict, n_patches, sz=160, channel = 'all'): 64 | """ 65 | :param channel: 0: Buildings, 1: Roads & Tracks, 2: Trees, 3: Crops, 4: Water, or 'all' 66 | 67 | """ 68 | x = list() 69 | y = list() 70 | total_patches = 0 71 | while total_patches < n_patches: 72 | img_id = random.choice(list(x_dict.keys())) # random.sample() rejects dict views on Python 3.11+ 73 | img = x_dict[img_id] 74 | mask = y_dict[img_id] 75 | img_patch, mask_patch = get_rand_patch(img, mask, sz, channel) 76 | x.append(img_patch) 77 | y.append(mask_patch) 78 | total_patches += 1 79 | print('Generated {} patches'.format(total_patches)) 80 | return np.array(x), np.array(y) 81 | 82 | def load_data(path = './data/'): 83 | """ 84 | :param path: the path of the dataset which includes mband and gt_mband folders 85 | :return: X_DICT_TRAIN, Y_DICT_TRAIN, X_DICT_VALIDATION, Y_DICT_VALIDATION 86 | """ 87 | trainIds = [str(i).zfill(2) for i in range(1, 25)] # all available ids: from "01" to "24" 88 | 89 | X_DICT_TRAIN = dict() 90 | Y_DICT_TRAIN = dict() 91 | X_DICT_VALIDATION = dict() 92 | Y_DICT_VALIDATION = dict() 93 | 94 | print('Reading images') 95 | for img_id in trainIds: 96 | 97 | img_m = normalize(tiff.imread(path + 'mband/{}.tif'.format(img_id)).transpose([1, 2, 0])) 98 | mask = tiff.imread(path + 
'gt_mband/{}.tif'.format(img_id)).transpose([1, 2, 0]) / 255 99 | train_xsz = int(3/4 * img_m.shape[0]) # use 75% of image as train and 25% for validation 100 | X_DICT_TRAIN[img_id] = img_m[:train_xsz, :, :] 101 | Y_DICT_TRAIN[img_id] = mask[:train_xsz, :, :] 102 | X_DICT_VALIDATION[img_id] = img_m[train_xsz:, :, :] 103 | Y_DICT_VALIDATION[img_id] = mask[train_xsz:, :, :] 104 | #print(img_id + ' read') 105 | print('Images are read') 106 | return X_DICT_TRAIN, Y_DICT_TRAIN, X_DICT_VALIDATION, Y_DICT_VALIDATION 107 | 108 | def plot_train_data(X_DICT_TRAIN, Y_DICT_TRAIN, image_number = 12): 109 | 110 | labels = ['Original Image with the 8 bands', 'Ground Truth: Buildings', 'Ground Truth: Roads & Tracks', 'Ground Truth: Trees', 'Ground Truth: Crops', 'Ground Truth: Water'] 111 | 112 | image_number = str(image_number).zfill(2) 113 | number_of_GTbands = Y_DICT_TRAIN[image_number].shape[2] 114 | f, axarr = plt.subplots(1, number_of_GTbands + 1, figsize=(25,25)) 115 | 116 | band_indices = [0, 1, 2] 117 | print('Image shape is: ',X_DICT_TRAIN[image_number].shape) 118 | print("Ground Truth's shape is: ",Y_DICT_TRAIN[image_number].shape) 119 | 120 | ep.plot_rgb(X_DICT_TRAIN[image_number].transpose([2,0,1]), 121 | rgb=band_indices, 122 | title=labels[0], 123 | stretch=True, 124 | ax=axarr[0]) 125 | 126 | for i in range(0, number_of_GTbands): 127 | axarr[i+1].imshow(Y_DICT_TRAIN[image_number][:,:,i]) 128 | #print(labels[i+1]) 129 | axarr[i+1].set_title(labels[i+1]) 130 | 131 | plt.show() 132 | 133 | 134 | def Abs_sobel_thresh(image,orient='x',thresh=(40,250) ,sobel_kernel=3): 135 | gray=image#cv2.cvtColor(image,cv2.COLOR_RGB2GRAY) 136 | if orient=='x': 137 | #the operator calculates the derivatives of the pixel values along the horizontal direction to make a filter. 
138 | sobel=cv2.Sobel(gray,cv2.CV_64F,1,0,ksize= sobel_kernel) 139 | if (orient=='y'): 140 | sobel=cv2.Sobel(gray,cv2.CV_64F,0,1,ksize= sobel_kernel) 141 | abs_sobel=np.absolute(sobel) 142 | scaled_sobel=(255*abs_sobel/np.max(abs_sobel)) 143 | grad_binary=np.zeros_like(scaled_sobel) 144 | grad_binary[(scaled_sobel>=thresh[0])&(scaled_sobel<=thresh[1])]=1 145 | return grad_binary 146 | 147 | 148 | 149 | def Mag_thresh(image, sobel_kernel=3, mag_thresh=(0, 255)): 150 | gray=image#cv2.cvtColor(image,cv2.COLOR_RGB2GRAY) 151 | sobelx=cv2.Sobel(gray,cv2.CV_64F,1,0,ksize=sobel_kernel) 152 | sobely=cv2.Sobel(gray,cv2.CV_64F,0,1,ksize=sobel_kernel) # dx=0, dy=1 for the y-derivative 153 | 154 | gradmag=np.sqrt(sobelx**2+sobely**2) 155 | scale_factor = np.max(gradmag)/255 156 | gradmag=np.uint8(gradmag/scale_factor) 157 | mag_binary=np.zeros_like(gradmag) 158 | mag_binary[(gradmag>=mag_thresh[0])&(gradmag<=mag_thresh[1])]=1 # apply threshold 159 | 160 | return mag_binary 161 | 162 | def Dir_threshold(image, sobel_kernel=3, thresh=(0, np.pi/2)): 163 | gray=image#cv2.cvtColor(image,cv2.COLOR_RGB2GRAY) 164 | sobelx=cv2.Sobel(gray,cv2.CV_64F,1,0,ksize=sobel_kernel) 165 | sobely=cv2.Sobel(gray,cv2.CV_64F,0,1,ksize=sobel_kernel) # dx=0, dy=1 for the y-derivative 166 | abs_sobelx=np.absolute(sobelx) 167 | abs_sobely=np.absolute(sobely) 168 | abs_graddir=np.arctan2(abs_sobely,abs_sobelx) # arctan2: np.arctan(a, b) would treat b as the out buffer 169 | dir_binary=np.zeros_like(abs_graddir) 170 | dir_binary[(abs_graddir>=thresh[0])&(abs_graddir<=thresh[1])]=1 # apply threshold on gradient direction 171 | 172 | 173 | return dir_binary 174 | 175 | def Combined_thresholds(gradx,grady,mag_binary,dir_binary): 176 | combined=np.zeros_like(dir_binary) 177 | combined[(gradx==1)|(grady==1)|(mag_binary==1)|(dir_binary==1)]=1 178 | return combined 179 | 180 | 181 | def BilateralFilter(image, kernel_size,sigmaSpace,sigmaColor): # a bilateral filter keeps edges sharp while removing noise 182 | img=np.copy(image) 183 | img=cv2.bilateralFilter(img,kernel_size,sigmaColor,sigmaSpace) 184 | #plt.imshow(img) 185 | 
return img 186 | 187 | 188 | def Erosion(image, filter_size = 2, iteration= 1): 189 | img=np.copy(image) 190 | kernel = np.ones((filter_size,filter_size),np.uint8) 191 | erosion=cv2.erode(img,kernel,iterations=iteration) 192 | return erosion 193 | 194 | def Opening(image, filter_size): 195 | # opening is just another name for erosion followed by dilation 196 | img=np.copy(image) 197 | kernel=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(filter_size,filter_size)) 198 | opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel) 199 | return opening 200 | 201 | 202 | 203 | def Closing(image,k): # closing is useful to recover the overall contour of a figure; opening is suited to isolating subpatterns 204 | kernel = np.ones((k, k), np.uint8) 205 | img=np.copy(image) 206 | img_close = cv2.morphologyEx(img, op= cv2.MORPH_CLOSE,kernel=kernel) 207 | return img_close 208 | 209 | def Denoise(image,k): 210 | img=np.copy(image) 211 | struct=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(k,k)) 212 | img=cv2.morphologyEx(img,cv2.MORPH_OPEN,struct) 213 | return img 214 | 215 | def Binary(image, threshold, max_value = 1): 216 | img=np.copy(image) 217 | (t,masklayer)=cv2.threshold(img,threshold,max_value,cv2.THRESH_BINARY) 218 | return masklayer 219 | 220 | def Gaussian_filter(image, sigma =1): 221 | img=np.copy(image) 222 | blur = filters.gaussian(img, sigma=sigma) 223 | return blur 224 | 225 | def Find_threshold_otsu(image): 226 | t = filters.threshold_otsu(image) 227 | return t 228 | 229 | 230 | def ExtractObjects(image): 231 | img=np.copy(image) 232 | #kernel=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(2,2)) 233 | #erosion=cv2.erode(img,kernel,iterations=1) 234 | #bliteralfilter=cv2.bilateralFilter(erosion,5,75,75) 235 | #(t,masklayer)=cv2.threshold(bliteralfilter,0,1,cv2.THRESH_BINARY|cv2.THRESH_OTSU) 236 | #denoising = Denoise(img,1) 237 | blob_labels=measure.label(img,background=0) 238 | number_of_objects=len(np.unique(blob_labels))-1 # count distinct labels, excluding the background (0) 239 | return blob_labels,number_of_objects 240 | 241 | 
242 | def post_processing(img): 243 | 244 | blur = Gaussian_filter(img, sigma=1) 245 | t = Find_threshold_otsu(blur) 246 | binary_img = Binary(blur,t) 247 | opened_img = Opening(binary_img, filter_size = 3) 248 | blob_labels,number_of_objects = ExtractObjects(opened_img) 249 | 250 | return opened_img, number_of_objects, blob_labels --------------------------------------------------------------------------------
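The counting step at the heart of `ExtractObjects` and `post_processing` is connected-component labeling of the binary tree mask: each connected blob of foreground pixels is one object. A minimal, dependency-free sketch of that idea is below; `count_blobs` is a hypothetical helper written for illustration (the repo uses `skimage.measure.label`, which does the same job far more efficiently):

```python
import numpy as np
from collections import deque


def count_blobs(binary):
    """Label 4-connected foreground blobs in a binary mask via BFS flood fill.

    Illustrative stand-in for skimage.measure.label(img, background=0):
    returns a label image and the number of blobs (background excluded).
    """
    binary = np.asarray(binary)
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and labels[i, j] == 0:
                current += 1                      # found a new blob
                labels[i, j] = current
                q = deque([(i, j)])
                while q:                          # flood-fill its pixels
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current


# Two separate 4-connected blobs in a toy mask -> a count of 2
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1]])
labels, n = count_blobs(mask)
```

This is also why `post_processing` smooths and opens the mask first: without the Gaussian blur, Otsu threshold, and morphological opening, touching crowns merge into one component and speckle noise inflates the count.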