The Netflora Project applies geotechnologies to forest automation and carbon stock mapping in native forest areas of Western Amazonia. It is an initiative developed by Embrapa Acre with sponsorship from the JBS Fund for the Amazon.
12 |
13 |
Here we cover the "Forest Inventory using drones" component. Drones and artificial intelligence are used to automate stages of the forest inventory, identifying strategic species. More than 50,000 hectares of forest have already been mapped to collect information for the Netflora dataset.
14 |
15 |
69 |
70 | * [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)
71 | * [https://github.com/WongKinYiu/yolov7](https://github.com/WongKinYiu/yolov7)
72 |
--------------------------------------------------------------------------------
/README.pt.md:
--------------------------------------------------------------------------------
1 | # **Netflora**
2 |
3 |
4 |
5 | The Netflora Project applies geotechnologies to forest automation and carbon stock mapping in native forest areas of Western Amazonia. It is an initiative developed by Embrapa Acre with support from the JBS Fund for the Amazon.
6 |
7 |
Here we cover the "Forest Inventory using drones" component. Drones and artificial intelligence are used to automate stages of the forest inventory, identifying strategic species. More than 50,000 hectares of forest have already been mapped to collect information for the Netflora dataset.
8 |
9 |
10 |
11 |
12 |

13 |
14 |

15 |
16 |

17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 | ## Running Detection
25 |
26 | ``!python detect.py --device 0 --weights model_weights.pt --img 1536``
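
The flags follow `detect.py`'s argument parser: `--device 0` selects the first GPU (`cpu` runs without one), `--weights` points to the trained model file, and `--img 1536` sets the inference tile size in pixels. As a minimal sketch, a CPU run with a stricter confidence threshold than the 0.01 default could look like:

``!python detect.py --device cpu --weights model_weights.pt --img 1536 --conf-thres 0.25``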
27 |
28 | ## Visualizing Detection Results
29 |
30 | ``!python results.py``
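
By default, `results.py` reads the most recent `runs/detect/exp*` run, maps each detected class to a species using `json/groups.json`, filters overlapping boxes with an IoU-based cleanup, and writes shapefiles to `results/shapefiles/` and tables to `results/csv/`.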
31 |
32 | ## Detection Examples from the Algorithms
33 |
34 |
35 |
36 |

37 |
38 |

39 |
40 |

41 |
42 |
43 |
44 | ## Website
45 |
46 | https://www.embrapa.br/acre/netflora
47 |
48 |
49 | ## Citation
50 |
51 |
52 | ## License
53 |
54 | Distributed under the GPL 3.0 license. See [LICENSE](LICENSE.md) for more information.
55 |
56 | ## Useful Links
57 | - [Sample orthophoto](https://drive.google.com/drive/folders/1OcRel7fJHALwm9ZAdU3rSlFwV_4iaZnp?usp=sharing)
58 | - [Distance-learning course](https://ava.sede.embrapa.br/course/view.php?id=470)
59 | - [Frequently Asked Questions (FAQ)](https://www.embrapa.br/web/portal/acre/tecnologias/netflora/perguntas-e-respostas)
60 | - [Embrapa Acre](https://www.embrapa.br/acre/)
61 | - [JBS Fund for the Amazon](https://fundojbsamazonia.org/)
62 |
63 |
64 |
65 | ## Acknowledgements
66 |
68 |
69 | * [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)
70 | * [https://github.com/WongKinYiu/yolov7](https://github.com/WongKinYiu/yolov7)
71 |
--------------------------------------------------------------------------------
/detect.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: detect.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 |
9 | import argparse
10 | import time
11 | from pathlib import Path
12 |
13 | import cv2
14 | import torch
15 | import torch.backends.cudnn as cudnn
16 | from numpy import random
17 |
18 | from models.experimental import attempt_load
19 | from utils.datasets import LoadStreams, LoadImages
20 | from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \
21 | scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
22 | from utils.plots import plot_one_box
23 | from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel
24 |
25 |
26 | def detect(save_img=False):
27 | source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace
28 | save_img = not opt.nosave and not source.endswith('.txt') # save inference images
29 | webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
30 | ('rtsp://', 'rtmp://', 'http://', 'https://'))
31 |
32 | # Directories
33 | save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run
34 | (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
35 |
36 | # Initialize
37 | set_logging()
38 | device = select_device(opt.device)
39 | half = device.type != 'cpu' # half precision only supported on CUDA
40 |
41 | # Load model
42 | model = attempt_load(weights, map_location=device) # load FP32 model
43 | stride = int(model.stride.max()) # model stride
44 | imgsz = check_img_size(imgsz, s=stride) # check img_size
45 |
46 | if trace:
47 | model = TracedModel(model, device, opt.img_size)
48 |
49 | if half:
50 | model.half() # to FP16
51 |
52 | # Second-stage classifier
53 | classify = False
54 | if classify:
55 | modelc = load_classifier(name='resnet101', n=2) # initialize
56 |         modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']); modelc.to(device).eval()  # load_state_dict returns key info, not the model, so move/eval separately
57 |
58 | # Set Dataloader
59 | vid_path, vid_writer = None, None
60 | if webcam:
61 | view_img = check_imshow()
62 | cudnn.benchmark = True # set True to speed up constant image size inference
63 | dataset = LoadStreams(source, img_size=imgsz, stride=stride)
64 | else:
65 | dataset = LoadImages(source, img_size=imgsz, stride=stride)
66 |
67 | # Get names and colors
68 | names = model.module.names if hasattr(model, 'module') else model.names
69 | colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]
70 |
71 | # Run inference
72 | if device.type != 'cpu':
73 | model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
74 | old_img_w = old_img_h = imgsz
75 | old_img_b = 1
76 |
77 | t0 = time.time()
78 | for path, img, im0s, vid_cap in dataset:
79 | img = torch.from_numpy(img).to(device)
80 | img = img.half() if half else img.float() # uint8 to fp16/32
81 | img /= 255.0 # 0 - 255 to 0.0 - 1.0
82 | if img.ndimension() == 3:
83 | img = img.unsqueeze(0)
84 |
85 | # Warmup
86 | if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]):
87 | old_img_b = img.shape[0]
88 | old_img_h = img.shape[2]
89 | old_img_w = img.shape[3]
90 | for i in range(3):
91 | model(img, augment=opt.augment)[0]
92 |
93 | # Inference
94 | t1 = time_synchronized()
95 | with torch.no_grad(): # Calculating gradients would cause a GPU memory leak
96 | pred = model(img, augment=opt.augment)[0]
97 | t2 = time_synchronized()
98 |
99 | # Apply NMS
100 | pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
101 | t3 = time_synchronized()
102 |
103 | # Apply Classifier
104 | if classify:
105 | pred = apply_classifier(pred, modelc, img, im0s)
106 |
107 | # Process detections
108 | for i, det in enumerate(pred): # detections per image
109 | if webcam: # batch_size >= 1
110 | p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count
111 | else:
112 | p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0)
113 |
114 | p = Path(p) # to Path
115 | save_path = str(save_dir / p.name) # img.jpg
116 | txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt
117 | gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
118 | if len(det):
119 | # Rescale boxes from img_size to im0 size
120 | det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
121 |
122 | # Print results
123 | for c in det[:, -1].unique():
124 | n = (det[:, -1] == c).sum() # detections per class
125 | s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
126 |
127 | # Write results
128 | for *xyxy, conf, cls in reversed(det):
129 | if save_txt: # Write to file
130 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
131 | line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh) # label format
132 | with open(txt_path + '.txt', 'a') as f:
133 | f.write(('%g ' * len(line)).rstrip() % line + '\n')
134 |
135 | if save_img or view_img: # Add bbox to image
136 | label = f'{names[int(cls)]} {conf:.2f}'
137 | plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
138 |
139 | # Print time (inference + NMS)
140 | print(f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS')
141 |
142 | # Stream results
143 | if view_img:
144 | cv2.imshow(str(p), im0)
145 | cv2.waitKey(1) # 1 millisecond
146 |
147 | # Save results (image with detections)
148 | if save_img:
149 | if dataset.mode == 'image':
150 | cv2.imwrite(save_path, im0)
151 | print(f" The image with the result is saved in: {save_path}")
152 | else: # 'video' or 'stream'
153 | if vid_path != save_path: # new video
154 | vid_path = save_path
155 | if isinstance(vid_writer, cv2.VideoWriter):
156 | vid_writer.release() # release previous video writer
157 | if vid_cap: # video
158 | fps = vid_cap.get(cv2.CAP_PROP_FPS)
159 | w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
160 | h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
161 | else: # stream
162 | fps, w, h = 30, im0.shape[1], im0.shape[0]
163 | save_path += '.mp4'
164 | vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
165 | vid_writer.write(im0)
166 |
167 | if save_txt or save_img:
168 | s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
169 | #print(f"Results saved to {save_dir}{s}")
170 |
171 | print(f'Done. ({time.time() - t0:.3f}s)')
172 |
173 |
174 | if __name__ == '__main__':
175 | parser = argparse.ArgumentParser()
176 | parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)')
177 | parser.add_argument('--source', type=str, default='processing/output_tiles', help='source') # file/folder, 0 for webcam
178 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
179 | parser.add_argument('--conf-thres', type=float, default=0.01, help='object confidence threshold')
180 | parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS')
181 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
182 | parser.add_argument('--view-img', action='store_true', help='display results')
183 |     parser.add_argument('--save-txt', action='store_true', default=True, help='save results to *.txt')
184 |     parser.add_argument('--save-conf', action='store_true', default=True, help='save confidences in --save-txt labels')
185 | parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
186 | parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
187 | parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
188 | parser.add_argument('--augment', action='store_true', help='augmented inference')
189 | parser.add_argument('--update', action='store_true', help='update all models')
190 | parser.add_argument('--project', default='runs/detect', help='save results to project/name')
191 | parser.add_argument('--name', default='exp', help='save results to project/name')
192 | parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
193 |     parser.add_argument('--no-trace', action='store_true', help="don't trace model")
194 | opt = parser.parse_args()
195 | print(opt)
196 | #check_requirements(exclude=('pycocotools', 'thop'))
197 |
198 | with torch.no_grad():
199 | if opt.update: # update all models (to fix SourceChangeWarning)
200 | for opt.weights in ['yolov7.pt']:
201 | detect()
202 | strip_optimizer(opt.weights)
203 | else:
204 | detect()
205 |
--------------------------------------------------------------------------------
/inference/images/Acai.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/Acai.jpg
--------------------------------------------------------------------------------
/inference/images/PFMNs.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/PFMNs.jpg
--------------------------------------------------------------------------------
/inference/images/Palmeiras.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/Palmeiras.jpg
--------------------------------------------------------------------------------
/inference/images/detection.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/detection.gif
--------------------------------------------------------------------------------
/inference/images/ep01s002y2111n2733.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/ep01s002y2111n2733.jpg
--------------------------------------------------------------------------------
/inference/images/ep01s002y2111n2736.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/ep01s002y2111n2736.jpg
--------------------------------------------------------------------------------
/inference/images/ep01s002y2111n2739.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/ep01s002y2111n2739.jpg
--------------------------------------------------------------------------------
/inference/images/ep01s002y2111n2744.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/inference/images/ep01s002y2111n2744.jpg
--------------------------------------------------------------------------------
/json/categories.json:
--------------------------------------------------------------------------------
1 | {
2 | "categories": {
3 | "Açaí": [
4 | {"species_id": "ep01", "common_name": "Açaí solteiro", "scientific_name": "Euterpe precatoria Mart."},
5 | {"species_id": "ep35", "common_name": "Açaí solteiro produtivo", "scientific_name": null}
6 | ],
7 | "Castanheira": [
8 | {"species_id": "be03", "common_name": "Castanheira", "scientific_name": "Bertholletia excelsa Bonpl."}
9 | ],
10 | "Palmeiras": [
11 | {"species_id": "se04", "common_name": "Paxiúba", "scientific_name": "Socratea exorrhiza (Mart.) H. Wendl."},
12 | {"species_id": "mf09", "common_name": "Burití", "scientific_name": "Mauritia flexuosa L.f."},
13 | {"species_id": "ab10", "common_name": "Jací", "scientific_name": "Attalea butyracea (Mutis ex Lf) Wess.Boer"},
14 | {"species_id": "ap12", "common_name": "Ouricuri", "scientific_name": "Attalea phalerata Mart. ex Spreng."},
15 | {"species_id": "au13", "common_name": "Murumuru", "scientific_name": "Astrocaryum ulei Burret"},
16 | {"species_id": "aa14", "common_name": "Tucumã", "scientific_name": "Astrocaryum aculeatum G.Mey."},
17 | {"species_id": "am16", "common_name": "Inajá", "scientific_name": "Attalea maripa (Aubl.) Mart."},
18 | {"species_id": "ob19", "common_name": "Patauá", "scientific_name": "Oenocarpus bataua Mart."},
19 | {"species_id": "ep35", "common_name": "Açaí solteiro produtivo", "scientific_name": null},
20 | {"species_id": "at34", "common_name": "Cocão", "scientific_name": "Attalea tessmannii Burret"}
21 | ],
22 | "PFNMs": [
23 | {"species_id": "al30", "common_name": "Garapeira", "scientific_name": "Apuleia leiocarpa (Vogel) JFMacbr."},
24 | {"species_id": "ba24", "common_name": "Manite", "scientific_name": "Brosimum alicastrum Sw."},
25 | {"species_id": "be03", "common_name": "Castanheira", "scientific_name": "Bertholletia excelsa Bonpl."},
26 | {"species_id": "bj11", "common_name": "Bajão", "scientific_name": "Parkia paraensis Ducke"},
27 | {"species_id": "cm29", "common_name": "Copaíba", "scientific_name": "Copaifera multijuga Hayne"},
28 | {"species_id": "cm40", "common_name": "Tauari", "scientific_name": "Couratari macrosperma A.C.Sm."},
29 | {"species_id": "co06", "common_name": "Cedro", "scientific_name": "Cedrela odorata L."},
30 | {"species_id": "cp15", "common_name": "Samaúma", "scientific_name": "Ceiba pentandra (L.) Gaertn."},
31 | {"species_id": "cs21", "common_name": "Samauma preta", "scientific_name": "Ceiba samauma (Mart. & Zucc.) K.Schum."},
32 | {"species_id": "cs36", "common_name": "Samaúma barriguda", "scientific_name": "Ceiba speciosa (A.St.-Hil.) Ravenna"},
33 |       {"species_id": "cu05", "common_name": "Caucho", "scientific_name": "Castilla ulei Warb."},
34 | {"species_id": "do22", "common_name": "Cumaru ferro", "scientific_name": "Dipteryx odorata (Aubl.) Willd."},
35 |       {"species_id": "es00", "common_name": "Orelha de macaco", "scientific_name": "Enterolobium schomburgkii"},
36 | {"species_id": "ev73", "common_name": "Louro abacate", "scientific_name": "Endlicheria verticillata"},
37 | {"species_id": "ho45", "common_name": "Jutai", "scientific_name": "Hymenaea oblongifolia"},
38 | {"species_id": "hy47", "common_name": "Angelin", "scientific_name": null},
39 | {"species_id": "mh28", "common_name": "Maçaranduba", "scientific_name": "Manilkara huberi (Ducke) Standl."},
40 | {"species_id": "mv25", "common_name": "Abiu rosa", "scientific_name": "Micropholis venulosa (Mart. & Eichler ex Miq.) Pierre"},
41 | {"species_id": "pg25", "common_name": "Carapanauba", "scientific_name": "Peltogyne sp."},
42 | {"species_id": "sa08", "common_name": "Pinho cuiabano", "scientific_name": "Schizolobium amazonicum Ducke"},
43 | {"species_id": "sa31", "common_name": "Angico", "scientific_name": "Parkia nitida Miq."},
44 | {"species_id": "ss44", "common_name": "Taxi-vermelho AC", "scientific_name": null},
45 | {"species_id": "tm57", "common_name": "Taxi preto", "scientific_name": "Tachigali myrmecophila"}
46 | ],
47 | "PFMS": [
48 | {"species_id": "sg37", "common_name": "Baginha", "scientific_name": "Stryphnodendron guianense (Aubl.) Benth."},
49 | {"species_id": "ap39", "common_name": "Espinheiro preto", "scientific_name": "Acacia polyphylla DC."},
50 | {"species_id": "cu05", "common_name": "Caucho RO", "scientific_name": null},
51 | {"species_id": "co06", "common_name": "Cedro", "scientific_name": "Cedrela odorata L."},
52 | {"species_id": "do22", "common_name": "Cumaru ferro", "scientific_name": "Dipteryx odorata (Aubl.) Willd."},
53 | {"species_id": "ba24", "common_name": "Manite", "scientific_name": "Brosimum alicastrum Sw."},
54 | {"species_id": "mv25", "common_name": "Abiu rosa", "scientific_name": "Micropholis venulosa (Mart. & Eichler ex Miq.) Pierre"},
55 | {"species_id": "hc27", "common_name": "Assacú", "scientific_name": "Hura crepitans L."},
56 | {"species_id": "mh29", "common_name": "Maçaranduba", "scientific_name": "Manilkara huberi (Ducke) Standl."},
57 | {"species_id": "al30", "common_name": "Garapeira", "scientific_name": "Apuleia leiocarpa (Vogel) JFMacbr."},
58 | {"species_id": "ec33", "common_name": "Castanharana vermelha", "scientific_name": "Eschweilera coriacea (DC.) SAMori"},
59 | {"species_id": "cm40", "common_name": "Tauari", "scientific_name": "Couratari macrosperma A.C.Sm."},
60 | {"species_id": "hc41", "common_name": "Jatobá", "scientific_name": "Hymenaea courbaril L."},
61 | {"species_id": "sp42", "common_name": "Taxi-branco RO", "scientific_name": "Sclerolobium paniculatum Vogel"},
62 | {"species_id": "de43", "common_name": "Faveira-ferro RO", "scientific_name": "Dinizia excelsa Ducke"}
63 | ],
64 | "Ecológico": [
65 | {"species_id": "am02", "common_name": "Árvore morta", "scientific_name": null},
66 | {"species_id": "fe17", "common_name": "Fenologia", "scientific_name": null},
67 | {"species_id": "cs18", "common_name": "Cecrópia", "scientific_name": null}
68 | ],
69 | "Ambiental": [
70 | {"species_id": "tr23", "common_name": "Toras", "scientific_name": null},
71 | {"species_id": "cl07", "common_name": "Clareira", "scientific_name": null},
72 | {"species_id": "ex29", "common_name": "Exploração", "scientific_name": "Madeira serrada"}
73 | ]
74 | }
75 | }
76 |
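As a minimal sketch of how this structure can be queried (assuming it is loaded from `json/categories.json`, the same way `results.py` loads `groups.json`):

```python
import json

# Load the category -> species mapping
with open('json/categories.json', 'r', encoding='utf-8') as file:
    categories = json.load(file)['categories']

# List the species covered by the "Palmeiras" category
for entry in categories['Palmeiras']:
    name = entry['scientific_name'] or entry['common_name']  # scientific_name may be null
    print(entry['species_id'], '-', name)
```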
--------------------------------------------------------------------------------
/json/groups.json:
--------------------------------------------------------------------------------
1 |
2 | {
3 | "species_dict":{
4 | "es00": {
5 | "common_name": "Orelha de macaco",
6 |             "scientific_name": "Enterolobium schomburgkii"
7 | },
8 | "ep01": {
9 | "common_name": "Açaí solteiro",
10 | "scientific_name": "Euterpe precatoria Mart."
11 | },
12 | "am02": {
13 | "common_name": "Árvore morta",
14 | "scientific_name": null
15 | },
16 | "be03": {
17 | "common_name": "Castanheira",
18 | "scientific_name": "Bertholletia excelsa Bonpl."
19 | },
20 | "se04": {
21 | "common_name": "Paxiúba",
22 | "scientific_name": "Socratea exorrhiza (Mart.) H. Wendl."
23 | },
24 | "cu05": {
25 | "common_name": "Caucho",
26 |             "scientific_name": "Castilla ulei Warb."
27 | },
28 | "co06": {
29 | "common_name": "Cedro",
30 | "scientific_name": "Cedrela odorata L."
31 | },
32 | "cl07": {
33 | "common_name": "Clareira",
34 | "scientific_name": null
35 | },
36 | "sa08": {
37 | "common_name": "Pinho cuiabano",
38 | "scientific_name": "Schizolobium amazonicum Ducke"
39 | },
40 | "sa31": {
41 | "common_name": "Pinho cuiabano Florado",
42 | "scientific_name": "Schizolobium amazonicum Ducke"
43 | },
44 |
45 | "mf09": {
46 | "common_name": "Burití",
47 | "scientific_name": "Mauritia flexuosa L.f."
48 | },
49 | "ab10": {
50 | "common_name": "Jací",
51 | "scientific_name": "Attalea butyracea (Mutis ex Lf) Wess.Boer"
52 | },
53 | "bj11": {
54 | "common_name": "Bajão",
55 | "scientific_name": "Parkia paraensis Ducke"
56 | },
57 | "ap12": {
58 | "common_name": "Ouricuri",
59 | "scientific_name": "Attalea phalerata Mart. ex Spreng."
60 | },
61 | "au13": {
62 | "common_name": "Murumuru",
63 | "scientific_name": "Astrocaryum ulei Burret"
64 | },
65 | "aa14": {
66 | "common_name": "Tucumã",
67 | "scientific_name": "Astrocaryum aculeatum G.Mey."
68 | },
69 | "cp15": {
70 | "common_name": "Samaúma",
71 | "scientific_name": "Ceiba pentandra (L.) Gaertn."
72 | },
73 | "am16": {
74 | "common_name": "Inajá",
75 | "scientific_name": "Attalea maripa (Aubl.) Mart."
76 | },
77 | "fe17": {
78 | "common_name": "Fenologia",
79 | "scientific_name": null
80 | },
81 | "cs18": {
82 | "common_name": "Cecrópia",
83 | "scientific_name": null
84 | },
85 | "ob19": {
86 | "common_name": "Patauá",
87 | "scientific_name": "Oenocarpus bataua Mart."
88 | },
89 | "ex20": {
90 | "common_name": "Exploração",
91 | "scientific_name": "Madeira serrada"
92 | },
93 | "cs20": {
94 | "common_name": "Samauma preta",
95 | "scientific_name": "Ceiba samauma (Mart. & Zucc.) K.Schum."
96 | },
97 | "do22": {
98 | "common_name": "Cumaru ferro",
99 | "scientific_name": "Dipteryx odorata (Aubl.) Willd."
100 | },
101 | "tr23": {
102 | "common_name": "Toras",
103 | "scientific_name": null
104 | },
105 | "ba24": {
106 | "common_name": "Manite",
107 | "scientific_name": "Brosimum alicastrum Sw."
108 | },
109 | "mv25": {
110 | "common_name": "Abiu rosa",
111 | "scientific_name": "Micropholis venulosa (Mart. & Eichler ex Miq.) Pierre"
112 | },
113 | "fs26": {
114 | "common_name": "Ficus",
115 | "scientific_name": "Ficus maxima Mill."
116 | },
117 | "hc27": {
118 | "common_name": "Assacú",
119 | "scientific_name": "Hura crepitans L."
120 | },
121 | "mh29": {
122 | "common_name": "Maçaranduba",
123 | "scientific_name": "Manilkara huberi (Ducke) Standl."
124 | },
125 | "cm29": {
126 | "common_name": "Copaíba",
127 | "scientific_name": "Copaifera multijuga Hayne"
128 | },
129 | "al30": {
130 | "common_name": "Garapeira",
131 | "scientific_name": "Apuleia leiocarpa (Vogel) JFMacbr."
132 | },
133 | "sa31f": {
134 | "common_name": "Angico",
135 | "scientific_name": "Parkia nitida Miq."
136 | },
137 | "ec33": {
138 | "common_name": "Castanharana vermelha",
139 | "scientific_name": "Eschweilera coriacea (DC.) SAMori"
140 | },
141 | "at34": {
142 | "common_name": "Cocão",
143 | "scientific_name": "Attalea tessmannii Burret"
144 | },
145 | "ep35": {
146 | "common_name": "Açaí solteiro produtivo",
147 | "scientific_name": null
148 | },
149 | "cs36": {
150 | "common_name": "Samaúma barriguda",
151 | "scientific_name": "Ceiba speciosa (A.St.-Hil.) Ravenna"
152 | },
153 | "sg37": {
154 | "common_name": "Baginha",
155 | "scientific_name": "Stryphnodendron guianense (Aubl.) Benth."
156 | },
157 | "fm38": {
158 | "common_name": "Caxinguba",
159 | "scientific_name": "Ficus maxima Mill."
160 | },
161 | "ap39": {
162 | "common_name": "Espinheiro preto",
163 | "scientific_name": "Acacia polyphylla DC."
164 | },
165 | "cm40": {
166 | "common_name": "Tauari",
167 | "scientific_name": "Couratari macrosperma A.C.Sm."
168 | },
169 | "hc41": {
170 | "common_name": "Jatobá",
171 | "scientific_name": "Hymenaea courbaril L."
172 | },
173 | "sp42": {
174 | "common_name": "Taxi-branco RO",
175 | "scientific_name": "Sclerolobium paniculatum Vogel"
176 | },
177 | "de43": {
178 | "common_name": "Faveira-ferro RO",
179 | "scientific_name": "Dinizia excelsa Ducke"
180 | },
181 | "ss44": {
182 | "common_name": "Taxi-vermelho AC",
183 | "scientific_name": null
184 | },
185 | "hp45": {
186 | "common_name": "Jutai",
187 | "scientific_name": "Hymenaea oblongifolia"
188 | },
189 | "op46": {
190 |             "common_name": "Algodoeiro",
191 | "scientific_name": null
192 | },
193 | "hy47": {
194 | "common_name": "Angelin",
195 | "scientific_name": null
196 | },
197 | "as48": {
198 | "common_name": "Babaçu",
199 | "scientific_name": "Attalea speciosa"
200 | },
201 | "oc49": {
202 | "common_name": "Bajão RO",
203 |             "scientific_name": "Ormosia coutinhoi"
204 | },
205 | "pp50": {
206 | "common_name": "Bandarra",
207 | "scientific_name": "Parkia paraensis"
208 | },
209 | "cu51": {
210 | "common_name": "Caucho RO",
211 | "scientific_name": null
212 | },
213 | "pm52": {
214 | "common_name": "Fava arara",
215 | "scientific_name": "Parkia multijuga"
216 | },
217 | "cm53": {
218 | "common_name": "Jequitiba carvão",
219 | "scientific_name": "Cariniana micrantha"
220 | },
221 | "bh54": {
222 | "common_name": "Mirindiba",
223 | "scientific_name": "Buchenavia huberi"
224 | },
225 | "cg56": {
226 | "common_name": "Pequirana",
227 | "scientific_name": "Caryocar glabrum"
228 | },
229 | "tm57": {
230 | "common_name": "Taxi preto",
231 | "scientific_name": "Tachigali myrmecophila"
232 | },
233 | "sm58": {
234 | "common_name": "Caja",
235 | "scientific_name": "Spondias mombin L"
236 | },
237 | "lp60": {
238 | "common_name": "Pau jacaré",
239 | "scientific_name": "Laetia procera (Poepp.) Eichler"
240 | },
241 | "sm61": {
242 | "common_name": "Mogno",
243 |             "scientific_name": "Swietenia macrophylla King."
244 | },
245 | "ms62": {
246 | "common_name": "Banana",
247 | "scientific_name": "Musa sp."
248 | },
249 | "vf63": {
250 | "common_name": "Quaruba",
251 | "scientific_name": "Vochysia ferruginea Mart."
252 | },
253 | "hb64": {
254 | "common_name": "Seringueira",
255 | "scientific_name": "Hevea brasiliensis"
256 | },
257 | "cn65": {
258 | "common_name": "Coqueiro",
259 | "scientific_name": "Cocos nucifera L."
260 | },
261 | "tg66": {
262 | "common_name": "Cupuaçu",
263 | "scientific_name": "Theobroma grandiflorum"
264 | },
265 | "bg67": {
266 | "common_name": "Pupunha",
267 | "scientific_name": "Bactris gasipaes (Kunth)"
268 | },
269 | "av68": {
270 | "common_name": "Amarelão",
271 | "scientific_name": "Aspidosperma vargasi"
272 | },
273 | "ac69": {
274 | "common_name": "Canelão",
275 | "scientific_name": "Aniba canelilla"
276 | },
277 | "ob70": {
278 | "common_name": "Bacaba",
279 | "scientific_name": "Oenocarpus bacaba"
280 | },
281 | "cp72": {
282 | "common_name": "Embaúba branca",
283 | "scientific_name": "Cecropia pachystachya"
284 | },
285 | "ev73": {
286 | "common_name": "Louro abacate",
287 | "scientific_name": "Endlicheria verticillata"
288 | },
289 | "pd74": {
290 | "common_name": "Palmeira desconhecida",
291 | "scientific_name": null
292 | }
293 | },
294 |
295 | "categories": {
296 | "Açaí": [
297 | {"specie": "ep01", "class_id": 0},
298 | {"specie": "ep35", "class_id": 1}
299 | ],
300 | "Castanheira": [
301 | {"specie": "be03", "class_id": 0}
302 | ],
303 | "Palmeiras": [
304 | {"specie": "ep01", "class_id": 0},
305 | {"specie": "se04", "class_id": 1},
306 | {"specie": "mf09", "class_id": 2},
307 | {"specie": "ab10", "class_id": 3},
308 | {"specie": "ap12", "class_id": 4},
309 | {"specie": "au13", "class_id": 5},
310 | {"specie": "aa14", "class_id": 6},
311 | {"specie": "am16", "class_id": 7},
312 | {"specie": "ob19", "class_id": 8},
313 | {"specie": "ep35", "class_id": 9},
314 | {"specie": "at34", "class_id": 10}
315 | ],
316 | "PFNMs": [
317 | {"specie": "al30", "class_id": 0},
318 | {"specie": "ba24", "class_id": 1},
319 | {"specie": "be03", "class_id": 2},
320 | {"specie": "bj11", "class_id": 3},
321 | {"specie": "cm29", "class_id": 4},
322 | {"specie": "cm40", "class_id": 5},
323 | {"specie": "co06", "class_id": 6},
324 | {"specie": "cp15", "class_id": 7},
325 | {"specie": "cs21", "class_id": 8},
326 | {"specie": "cs36", "class_id": 9},
327 | {"specie": "cu05", "class_id": 10},
328 | {"specie": "do22", "class_id": 11},
329 | {"specie": "es00", "class_id": 12},
330 | {"specie": "ev73", "class_id": 13},
331 | {"specie": "ho45", "class_id": 14},
332 | {"specie": "hy47", "class_id": 15},
333 | {"specie": "mh28", "class_id": 16},
334 | {"specie": "mv25", "class_id": 17},
335 | {"specie": "pg25", "class_id": 18},
336 | {"specie": "sa08", "class_id": 19},
337 | {"specie": "sa31f", "class_id": 20},
338 | {"specie": "ss44", "class_id": 21},
339 | {"specie": "tm57", "class_id": 22}
340 | ],
341 | "PMFS": [
342 | {"specie": "es00", "class_id": 0},
343 | {"specie": "be03", "class_id": 1},
344 | {"specie": "cu05", "class_id": 2},
345 | {"specie": "co06", "class_id": 3},
346 | {"specie": "sa08", "class_id": 4},
347 | {"specie": "bj11", "class_id": 5},
348 | {"specie": "cp15", "class_id": 6},
349 | {"specie": "cs20", "class_id": 7},
350 | {"specie": "do22", "class_id": 8},
351 | {"specie": "ba24", "class_id": 9},
352 | {"specie": "mv25", "class_id": 10},
353 | {"specie": "mh29", "class_id": 11},
354 | {"specie": "cm29", "class_id": 12},
355 | {"specie": "al30", "class_id": 13},
356 | {"specie": "sa31", "class_id": 14},
357 | {"specie": "cs36", "class_id": 15},
358 | {"specie": "cm40", "class_id": 16},
359 | {"specie": "ss44", "class_id": 17},
360 | {"specie": "hp45", "class_id": 18},
361 | {"specie": "hy47", "class_id": 19},
362 | {"specie": "cu51", "class_id": 20},
363 | {"specie": "sp42", "class_id": 21},
364 | {"specie": "ev73", "class_id": 22}
365 | ],
366 | "Ecológico": [
367 | {"specie": "am02", "class_id": 0},
368 | {"specie": "fe17", "class_id": 1},
369 | {"specie": "cs18", "class_id": 2}
370 | ],
371 | "Ambiental": [
372 | {"specie": "tr23", "class_id": 0},
373 | {"specie": "cl07", "class_id": 1},
374 | {"specie": "ex29", "class_id": 2}
375 | ]
376 | }
377 | }
378 |
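As a minimal sketch, `species_dict` and `categories` combine to resolve a model's class ids to species names (mirroring the lookups in `results.py`; the "Açaí" algorithm is used as the example):

```python
import json

with open('json/groups.json', 'r', encoding='utf-8') as file:
    data = json.load(file)
species_dict = data['species_dict']
categories = data['categories']

# Resolve each class id of the "Açaí" algorithm to its species
for entry in categories['Açaí']:
    info = species_dict[entry['specie']]
    print(entry['class_id'], entry['specie'], info['common_name'])
```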
--------------------------------------------------------------------------------
/json/species_data.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "species_id": "es00",
4 | "common_name": "Orelha de macaco",
5 |     "scientific_name": "Enterolobium schomburgkii",
6 | "category": null
7 | },
8 | {
9 | "species_id": "ep01",
10 | "common_name": "Açaí solteiro",
11 | "scientific_name": "Euterpe precatoria Mart.",
12 | "category": "Açaí"
13 | },
14 | {
15 | "species_id": "am02",
16 | "common_name": "Árvore morta",
17 | "scientific_name": null,
18 | "category": "Ecológico"
19 | },
20 | {
21 | "species_id": "be03",
22 | "common_name": "Castanheira",
23 | "scientific_name": "Bertholletia excelsa Bonpl.",
24 | "category": "Castanheira"
25 | },
26 | {
27 | "species_id": "se04",
28 | "common_name": "Paxiúba",
29 | "scientific_name": "Socratea exorrhiza (Mart.) H. Wendl.",
30 | "category": "Palmeiras"
31 | },
32 | {
33 | "species_id": "cu05",
34 | "common_name": "Caucho",
35 |     "scientific_name": "Castilla ulei Warb.",
36 | "category": "PFNMs"
37 | },
38 | {
39 | "species_id": "co06",
40 | "common_name": "Cedro",
41 | "scientific_name": "Cedrela odorata L.",
42 | "category": "PFNMs"
43 | },
44 | {
45 | "species_id": "cl07",
46 | "common_name": "Clareira",
47 | "scientific_name": null,
48 | "category": "Ambiental"
49 | },
50 | {
51 | "species_id": "sa08",
52 | "common_name": "Pinho cuiabano",
53 | "scientific_name": "Schizolobium amazonicum Ducke",
54 | "category": "Palmeiras"
55 | },
56 | {
57 | "species_id": "mf09",
58 | "common_name": "Burití",
59 | "scientific_name": "Mauritia flexuosa L.f.",
60 | "category": "Palmeiras"
61 | },
62 | {
63 | "species_id": "ab10",
64 | "common_name": "Jací",
65 | "scientific_name": "Attalea butyracea (Mutis ex Lf) Wess.Boer",
66 | "category": "Palmeiras"
67 | },
68 | {
69 | "species_id": "bj11",
70 | "common_name": "Bajão",
71 | "scientific_name": "Parkia paraensis Ducke",
72 | "category": "PFNMs"
73 | },
74 | {
75 | "species_id": "ap12",
76 | "common_name": "Ouricuri",
77 | "scientific_name": "Attalea phalerata Mart. ex Spreng.",
78 | "category": "Palmeiras"
79 | },
80 | {
81 | "species_id": "au13",
82 | "common_name": "Murumuru",
83 | "scientific_name": "Astrocaryum ulei Burret",
84 | "category": "Palmeiras"
85 | },
86 | {
87 | "species_id": "aa14",
88 | "common_name": "Tucumã",
89 | "scientific_name": "Astrocaryum aculeatum G.Mey.",
90 | "category": "Palmeiras"
91 | },
92 | {
93 | "species_id": "cp15",
94 | "common_name": "Samaúma",
95 | "scientific_name": "Ceiba pentandra (L.) Gaertn.",
96 | "category": "PFNMs"
97 | },
98 | {
99 | "species_id": "am16",
100 | "common_name": "Inajá",
101 | "scientific_name": "Attalea maripa (Aubl.) Mart.",
102 | "category": "Palmeiras"
103 | },
104 | {
105 | "species_id": "fe17",
106 | "common_name": "Fenologia",
107 | "scientific_name": null,
108 | "category": "Ecológico"
109 | },
110 | {
111 | "species_id": "cs18",
112 | "common_name": "Cecrópia",
113 | "scientific_name": null,
114 | "category": "Ecológico"
115 | },
116 | {
117 | "species_id": "ob19",
118 | "common_name": "Patauá",
119 | "scientific_name": "Oenocarpus bataua Mart.",
120 | "category": "Palmeiras"
121 | },
122 | {
123 | "species_id": "ex20",
124 | "common_name": "Exploração",
125 | "scientific_name": "Madeira serrada",
126 | "category": "Ambiental"
127 | },
128 | {
129 | "species_id": "cs20",
130 | "common_name": "Samauma preta",
131 | "scientific_name": "Ceiba samauma (Mart. & Zucc.) K.Schum.",
132 | "category": null
133 | },
134 | {
135 | "species_id": "do22",
136 | "common_name": "Cumaru ferro",
137 | "scientific_name": "Dipteryx odorata (Aubl.) Willd.",
138 | "category": "PFNMs"
139 | },
140 | {
141 | "species_id": "tr23",
142 | "common_name": "Toras",
143 | "scientific_name": null,
144 | "category": "Ambiental"
145 | },
146 | {
147 | "species_id": "ba24",
148 | "common_name": "Manite",
149 | "scientific_name": "Brosimum alicastrum Sw.",
150 | "category": "PFNMs"
151 | },
152 | {
153 | "species_id": "mv25",
154 | "common_name": "Abiu rosa",
155 | "scientific_name": "Micropholis venulosa (Mart. & Eichler ex Miq.) Pierre",
156 | "category": "PFNMs"
157 | },
158 | {
159 | "species_id": "fs26",
160 | "common_name": "Ficus",
161 | "scientific_name": "Ficus maxima Mill.",
162 | "category": null
163 | },
164 | {
165 | "species_id": "hc27",
166 | "common_name": "Assacú",
167 | "scientific_name": "Hura crepitans L.",
168 | "category": "PFNMs"
169 | },
170 | {
171 | "species_id": "mh29",
172 | "common_name": "Maçaranduba",
173 | "scientific_name": "Manilkara huberi (Ducke) Standl.",
174 | "category": "PFNMs"
175 | },
176 | {
177 | "species_id": "cm29",
178 | "common_name": "Copaíba",
179 | "scientific_name": "Copaifera multijuga Hayne",
180 | "category": "PFNMs"
181 | },
182 | {
183 | "species_id": "al30",
184 | "common_name": "Garapeira",
185 | "scientific_name": "Apuleia leiocarpa (Vogel) JFMacbr.",
186 | "category": "PFNMs"
187 | },
188 | {
189 | "species_id": "sa31",
190 | "common_name": "Angico",
191 | "scientific_name": "Parkia nitida Miq.",
192 | "category": "PFNMs"
193 | },
194 | {
195 | "species_id": "ec33",
196 | "common_name": "Castanharana vermelha",
197 | "scientific_name": "Eschweilera coriacea (DC.) SAMori",
198 | "category": "PFNMs"
199 | },
200 | {
201 | "species_id": "at34",
202 | "common_name": "Cocão",
203 | "scientific_name": "Attalea tessmannii Burret",
204 | "category": "Palmeiras"
205 | },
206 | {
207 | "species_id": "ep35",
208 | "common_name": "Açaí solteiro produtivo",
209 | "scientific_name": null,
210 | "category": "Açaí"
211 | },
212 | {
213 | "species_id": "cs36",
214 | "common_name": "Samaúma barriguda",
215 | "scientific_name": "Ceiba speciosa (A.St.-Hil.) Ravenna",
216 | "category": "PFNMs"
217 | },
218 | {
219 | "species_id": "sg37",
220 | "common_name": "Baginha",
221 | "scientific_name": "Stryphnodendron guianense (Aubl.) Benth.",
222 | "category": "PFMS"
223 | },
224 | {
225 | "species_id": "fm38",
226 | "common_name": "Caxinguba",
227 | "scientific_name": "Ficus maxima Mill.",
228 | "category": null
229 | },
230 | {
231 | "species_id": "ap39",
232 | "common_name": "Espinheiro preto",
233 | "scientific_name": "Acacia polyphylla DC.",
234 | "category": "PFMS"
235 | },
236 | {
237 | "species_id": "cm40",
238 | "common_name": "Tauari",
239 | "scientific_name": "Couratari macrosperma A.C.Sm.",
240 | "category": "PFNMs"
241 | },
242 | {
243 | "species_id": "hc41",
244 | "common_name": "Jatobá",
245 | "scientific_name": "Hymenaea courbaril L.",
246 | "category": "PFNMs"
247 | },
248 | {
249 | "species_id": "sp42",
250 | "common_name": "Taxi-branco RO",
251 | "scientific_name": "Sclerolobium paniculatum Vogel",
252 | "category": "PFNMs"
253 | },
254 | {
255 | "species_id": "de43",
256 | "common_name": "Faveira-ferro RO",
257 | "scientific_name": "Dinizia excelsa Ducke",
258 | "category": "PFMS"
259 | },
260 | {
261 | "species_id": "ss44",
262 | "common_name": "Taxi-vermelho AC",
263 | "scientific_name": null,
264 | "category": "Ambiental"
265 | },
266 | {
267 | "species_id": "hp45",
268 | "common_name": "Jutai",
269 | "scientific_name": "Hymenaea oblongifolia",
270 | "category": "PFMS"
271 | },
272 | {
273 | "species_id": "op46",
274 |     "common_name": "Algodoeiro",
275 | "scientific_name": null,
276 | "category": null
277 | },
278 | {
279 | "species_id": "hy47",
280 | "common_name": "Angelin",
281 | "scientific_name": null,
282 | "category": "Ambiental"
283 | },
284 | {
285 | "species_id": "as48",
286 | "common_name": "Babaçu",
287 | "scientific_name": "Attalea speciosa",
288 | "category": "PFMS"
289 | },
290 | {
291 | "species_id": "oc49",
292 | "common_name": "Bajão RO",
293 |     "scientific_name": "Ormosia coutinhoi",
294 | "category": "PFNMs"
295 | },
296 | {
297 | "species_id": "pp50",
298 | "common_name": "Bandarra",
299 | "scientific_name": "Parkia paraensis",
300 | "category": "PFNMs"
301 | },
302 | {
303 | "species_id": "cu51",
304 | "common_name": "Caucho RO",
305 | "scientific_name": null,
306 | "category": "PFNMs"
307 | },
308 | {
309 | "species_id": "pm52",
310 | "common_name": "Fava arara",
311 | "scientific_name": "Parkia multijuga",
312 | "category": "PFNMs"
313 | },
314 | {
315 | "species_id": "cm53",
316 | "common_name": "Jequitiba carvão",
317 | "scientific_name": "Cariniana micrantha",
318 | "category": "PFNMs"
319 | },
320 | {
321 | "species_id": "bh54",
322 | "common_name": "Mirindiba",
323 | "scientific_name": "Buchenavia huberi",
324 | "category": "PFNMs"
325 | },
326 | {
327 | "species_id": "cg56",
328 | "common_name": "Pequirana",
329 | "scientific_name": "Caryocar glabrum",
330 | "category": "PFNMs"
331 | },
332 | {
333 | "species_id": "tm57",
334 | "common_name": "Taxi preto",
335 | "scientific_name": "Tachigali myrmecophila",
336 | "category": "PFNMs"
337 | },
338 | {
339 | "species_id": "sm58",
340 | "common_name": "Caja",
341 | "scientific_name": "Spondias mombin L",
342 | "category": null
343 | },
344 | {
345 | "species_id": "lp60",
346 | "common_name": "Pau jacaré",
347 | "scientific_name": "Laetia procera (Poepp.) Eichler",
348 | "category": null
349 | },
350 | {
351 | "species_id": "sm61",
352 | "common_name": "Mogno",
353 |     "scientific_name": "Swietenia macrophylla King.",
354 | "category": null
355 | },
356 | {
357 | "species_id": "ms62",
358 | "common_name": "Banana",
359 | "scientific_name": "Musa sp.",
360 | "category": null
361 | },
362 | {
363 | "species_id": "vf63",
364 | "common_name": "Quaruba",
365 | "scientific_name": "Vochysia ferruginea Mart.",
366 | "category": null
367 | },
368 | {
369 | "species_id": "hb64",
370 | "common_name": "Seringueira",
371 | "scientific_name": "Hevea brasiliensis",
372 | "category": null
373 | },
374 | {
375 | "species_id": "cn65",
376 | "common_name": "Coqueiro",
377 | "scientific_name": "Cocos nucifera L.",
378 | "category": null
379 | },
380 | {
381 | "species_id": "tg66",
382 | "common_name": "Cupuaçu",
383 | "scientific_name": "Theobroma grandiflorum",
384 | "category": null
385 | },
386 | {
387 | "species_id": "bg67",
388 | "common_name": "Pupunha",
389 | "scientific_name": "Bactris gasipaes (Kunth)",
390 | "category": null
391 | },
392 | {
393 | "species_id": "av68",
394 | "common_name": "Amarelão",
395 | "scientific_name": "Aspidosperma vargasi",
396 | "category": null
397 | },
398 | {
399 | "species_id": "ac69",
400 | "common_name": "Canelão",
401 | "scientific_name": "Aniba canelilla",
402 | "category": null
403 | },
404 | {
405 | "species_id": "ob70",
406 | "common_name": "Bacaba",
407 | "scientific_name": "Oenocarpus bacaba",
408 | "category": null
409 | },
410 | {
411 | "species_id": "cp72",
412 | "common_name": "Embaúba branca",
413 | "scientific_name": "Cecropia pachystachya",
414 | "category": null
415 | },
416 | {
417 | "species_id": "ev73",
418 | "common_name": "Louro abacate",
419 | "scientific_name": "Endlicheria verticillata",
420 | "category": null
421 | },
422 | {
423 | "species_id": "pd74",
424 | "common_name": "Palmeira desconhecida",
425 | "scientific_name": null,
426 | "category": null
427 | }
428 | ]
429 |
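As a minimal sketch, this flat list loads directly into a pandas DataFrame (pandas is already in `requirements.txt`):

```python
import pandas as pd

# One row per species_id, with common/scientific names and category
df = pd.read_json('json/species_data.json')

# Count species per category; entries without a category appear as NaN
print(df['category'].value_counts(dropna=False))
```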
--------------------------------------------------------------------------------
/logo/Embrapa-Acre.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/logo/Embrapa-Acre.png
--------------------------------------------------------------------------------
/logo/Fundo-JBS.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/logo/Fundo-JBS.png
--------------------------------------------------------------------------------
/logo/Netflora.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/logo/Netflora.png
--------------------------------------------------------------------------------
/metrics/ACAI_Embrapa00_confusion_matrix_thresh0.25.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/metrics/ACAI_Embrapa00_confusion_matrix_thresh0.25.png
--------------------------------------------------------------------------------
/metrics/PALMEIRAS_Embrapa00_confusion_matrix_thresh0.25.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/metrics/PALMEIRAS_Embrapa00_confusion_matrix_thresh0.25.png
--------------------------------------------------------------------------------
/metrics/PMFS_Embrapa00_confusion_matrix_thresh0.25.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NetFlora/Netflora/6aaaa1b2285025083dc8926eb4c0d947911a25e5/metrics/PMFS_Embrapa00_confusion_matrix_thresh0.25.jpeg
--------------------------------------------------------------------------------
/models/__init__.py:
--------------------------------------------------------------------------------
1 | # init
--------------------------------------------------------------------------------
/models/experimental.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: experimental.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 |
9 | import numpy as np
10 | import random
11 | import torch
12 | import torch.nn as nn
13 |
14 | from models.common import Conv, DWConv
15 | from utils.google_utils import attempt_download
16 |
17 |
18 | class CrossConv(nn.Module):
19 | # Cross Convolution Downsample
20 | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
21 | # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
22 | super(CrossConv, self).__init__()
23 | c_ = int(c2 * e) # hidden channels
24 | self.cv1 = Conv(c1, c_, (1, k), (1, s))
25 | self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
26 | self.add = shortcut and c1 == c2
27 |
28 | def forward(self, x):
29 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
30 |
31 |
32 | class Sum(nn.Module):
33 | # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
34 | def __init__(self, n, weight=False): # n: number of inputs
35 | super(Sum, self).__init__()
36 | self.weight = weight # apply weights boolean
37 | self.iter = range(n - 1) # iter object
38 | if weight:
39 | self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
40 |
41 | def forward(self, x):
42 | y = x[0] # no weight
43 | if self.weight:
44 | w = torch.sigmoid(self.w) * 2
45 | for i in self.iter:
46 | y = y + x[i + 1] * w[i]
47 | else:
48 | for i in self.iter:
49 | y = y + x[i + 1]
50 | return y
51 |
52 |
53 | class MixConv2d(nn.Module):
54 | # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
55 | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
56 | super(MixConv2d, self).__init__()
57 | groups = len(k)
58 | if equal_ch: # equal c_ per group
59 | i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
60 | c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
61 | else: # equal weight.numel() per group
62 | b = [c2] + [0] * groups
63 | a = np.eye(groups + 1, groups, k=-1)
64 | a -= np.roll(a, 1, axis=1)
65 | a *= np.array(k) ** 2
66 | a[0] = 1
67 | c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
68 |
69 | self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
70 | self.bn = nn.BatchNorm2d(c2)
71 | self.act = nn.LeakyReLU(0.1, inplace=True)
72 |
73 | def forward(self, x):
74 | return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
75 |
76 |
77 | class Ensemble(nn.ModuleList):
78 | # Ensemble of models
79 | def __init__(self):
80 | super(Ensemble, self).__init__()
81 |
82 | def forward(self, x, augment=False):
83 | y = []
84 | for module in self:
85 | y.append(module(x, augment)[0])
86 | # y = torch.stack(y).max(0)[0] # max ensemble
87 | # y = torch.stack(y).mean(0) # mean ensemble
88 | y = torch.cat(y, 1) # nms ensemble
89 | return y, None # inference, train output
90 |
91 |
92 |
93 |
94 |
95 | class ORT_NMS(torch.autograd.Function):
96 | '''ONNX-Runtime NMS operation'''
97 | @staticmethod
98 | def forward(ctx,
99 | boxes,
100 | scores,
101 | max_output_boxes_per_class=torch.tensor([100]),
102 | iou_threshold=torch.tensor([0.45]),
103 | score_threshold=torch.tensor([0.25])):
104 | device = boxes.device
105 | batch = scores.shape[0]
106 | num_det = random.randint(0, 100)
107 | batches = torch.randint(0, batch, (num_det,)).sort()[0].to(device)
108 | idxs = torch.arange(100, 100 + num_det).to(device)
109 | zeros = torch.zeros((num_det,), dtype=torch.int64).to(device)
110 | selected_indices = torch.cat([batches[None], zeros[None], idxs[None]], 0).T.contiguous()
111 | selected_indices = selected_indices.to(torch.int64)
112 | return selected_indices
113 |
114 | @staticmethod
115 | def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold):
116 | return g.op("NonMaxSuppression", boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold)
117 |
118 |
119 | class TRT_NMS(torch.autograd.Function):
120 | '''TensorRT NMS operation'''
121 | @staticmethod
122 | def forward(
123 | ctx,
124 | boxes,
125 | scores,
126 | background_class=-1,
127 | box_coding=1,
128 | iou_threshold=0.45,
129 | max_output_boxes=100,
130 | plugin_version="1",
131 | score_activation=0,
132 | score_threshold=0.25,
133 | ):
134 | batch_size, num_boxes, num_classes = scores.shape
135 | num_det = torch.randint(0, max_output_boxes, (batch_size, 1), dtype=torch.int32)
136 | det_boxes = torch.randn(batch_size, max_output_boxes, 4)
137 | det_scores = torch.randn(batch_size, max_output_boxes)
138 | det_classes = torch.randint(0, num_classes, (batch_size, max_output_boxes), dtype=torch.int32)
139 | return num_det, det_boxes, det_scores, det_classes
140 |
141 | @staticmethod
142 | def symbolic(g,
143 | boxes,
144 | scores,
145 | background_class=-1,
146 | box_coding=1,
147 | iou_threshold=0.45,
148 | max_output_boxes=100,
149 | plugin_version="1",
150 | score_activation=0,
151 | score_threshold=0.25):
152 | out = g.op("TRT::EfficientNMS_TRT",
153 | boxes,
154 | scores,
155 | background_class_i=background_class,
156 | box_coding_i=box_coding,
157 | iou_threshold_f=iou_threshold,
158 | max_output_boxes_i=max_output_boxes,
159 | plugin_version_s=plugin_version,
160 | score_activation_i=score_activation,
161 | score_threshold_f=score_threshold,
162 | outputs=4)
163 | nums, boxes, scores, classes = out
164 | return nums, boxes, scores, classes
165 |
166 |
167 | class ONNX_ORT(nn.Module):
168 | '''onnx module with ONNX-Runtime NMS operation.'''
169 | def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=640, device=None, n_classes=80):
170 | super().__init__()
171 | self.device = device if device else torch.device("cpu")
172 | self.max_obj = torch.tensor([max_obj]).to(device)
173 | self.iou_threshold = torch.tensor([iou_thres]).to(device)
174 | self.score_threshold = torch.tensor([score_thres]).to(device)
175 | self.max_wh = max_wh # if max_wh != 0 : non-agnostic else : agnostic
176 | self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
177 | dtype=torch.float32,
178 | device=self.device)
179 | self.n_classes=n_classes
180 |
181 | def forward(self, x):
182 | boxes = x[:, :, :4]
183 | conf = x[:, :, 4:5]
184 | scores = x[:, :, 5:]
185 | if self.n_classes == 1:
186 | scores = conf # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
187 | # so there is no need to multiplicate.
188 | else:
189 | scores *= conf # conf = obj_conf * cls_conf
190 | boxes @= self.convert_matrix
191 | max_score, category_id = scores.max(2, keepdim=True)
192 | dis = category_id.float() * self.max_wh
193 | nmsbox = boxes + dis
194 | max_score_tp = max_score.transpose(1, 2).contiguous()
195 | selected_indices = ORT_NMS.apply(nmsbox, max_score_tp, self.max_obj, self.iou_threshold, self.score_threshold)
196 | X, Y = selected_indices[:, 0], selected_indices[:, 2]
197 | selected_boxes = boxes[X, Y, :]
198 | selected_categories = category_id[X, Y, :].float()
199 | selected_scores = max_score[X, Y, :]
200 | X = X.unsqueeze(1).float()
201 | return torch.cat([X, selected_boxes, selected_categories, selected_scores], 1)
202 |
203 | class ONNX_TRT(nn.Module):
204 | '''onnx module with TensorRT NMS operation.'''
205 | def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None ,device=None, n_classes=80):
206 | super().__init__()
207 | assert max_wh is None
208 | self.device = device if device else torch.device('cpu')
209 | self.background_class = -1,
210 | self.box_coding = 1,
211 | self.iou_threshold = iou_thres
212 | self.max_obj = max_obj
213 | self.plugin_version = '1'
214 | self.score_activation = 0
215 | self.score_threshold = score_thres
216 | self.n_classes=n_classes
217 |
218 | def forward(self, x):
219 | boxes = x[:, :, :4]
220 | conf = x[:, :, 4:5]
221 | scores = x[:, :, 5:]
222 | if self.n_classes == 1:
223 | scores = conf # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
224 | # so there is no need to multiplicate.
225 | else:
226 | scores *= conf # conf = obj_conf * cls_conf
227 | num_det, det_boxes, det_scores, det_classes = TRT_NMS.apply(boxes, scores, self.background_class, self.box_coding,
228 | self.iou_threshold, self.max_obj,
229 | self.plugin_version, self.score_activation,
230 | self.score_threshold)
231 | return num_det, det_boxes, det_scores, det_classes
232 |
233 |
234 | class End2End(nn.Module):
235 | '''export onnx or tensorrt model with NMS operation.'''
236 | def __init__(self, model, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None, device=None, n_classes=80):
237 | super().__init__()
238 | device = device if device else torch.device('cpu')
239 |         assert isinstance(max_wh, int) or max_wh is None
240 | self.model = model.to(device)
241 | self.model.model[-1].end2end = True
242 | self.patch_model = ONNX_TRT if max_wh is None else ONNX_ORT
243 | self.end2end = self.patch_model(max_obj, iou_thres, score_thres, max_wh, device, n_classes)
244 | self.end2end.eval()
245 |
246 | def forward(self, x):
247 | x = self.model(x)
248 | x = self.end2end(x)
249 | return x
250 |
251 |
252 |
253 |
254 |
255 | def attempt_load(weights, map_location=None):
256 | # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
257 | model = Ensemble()
258 | for w in weights if isinstance(weights, list) else [weights]:
259 | attempt_download(w)
260 | ckpt = torch.load(w, map_location=map_location, weights_only=False)
261 | model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
262 |
263 | # Compatibility updates
264 | for m in model.modules():
265 | if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
266 | m.inplace = True # pytorch 1.7.0 compatibility
267 | elif type(m) is nn.Upsample:
268 | m.recompute_scale_factor = None # torch 1.11.0 compatibility
269 | elif type(m) is Conv:
270 | m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
271 |
272 | if len(model) == 1:
273 | return model[-1] # return model
274 | else:
275 | print('Ensemble created with %s\n' % weights)
276 | for k in ['names', 'stride']:
277 | setattr(model, k, getattr(model[-1], k))
278 | return model # return ensemble
279 |
280 |
281 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | # Usage: pip install -r requirements.txt
2 |
3 | # Base ----------------------------------------
4 | matplotlib>=3.2.2
5 | numpy
6 | opencv-python>=4.1.1
7 | Pillow>=7.1.2
8 | PyYAML>=5.3.1
9 | requests>=2.23.0
10 | scipy>=1.4.1
11 | torch>=1.13.0  # torch.load(..., weights_only=...) in models/experimental.py needs 1.13+
12 | torchvision>=0.8.1,!=0.13.0
13 | tqdm>=4.41.0
14 | protobuf>=5.29.1,<6.0.0
15 | rasterio
16 | geopandas
17 | folium
18 |
19 | # Logging -------------------------------------
20 | tensorboard>=2.4.1
21 | # wandb
22 |
23 | # Plotting ------------------------------------
24 | pandas>=1.1.4
25 | seaborn>=0.11.0
26 |
27 | # Export --------------------------------------
28 | # coremltools>=4.1 # CoreML export
29 | # onnx>=1.9.0 # ONNX export
30 | # onnx-simplifier>=0.3.6 # ONNX simplifier
31 | # scikit-learn==0.19.2 # CoreML quantization
32 | # tensorflow>=2.4.1 # TFLite export
33 | # tensorflowjs>=3.9.0 # TF.js export
34 | # openvino-dev # OpenVINO export
35 |
36 | # Extras --------------------------------------
37 | ipython # interactive notebook
38 | psutil # system utilization
39 | thop # FLOPs computation
40 | # albumentations>=1.0.3
41 | # pycocotools>=2.0 # COCO mAP
42 | # roboflow
43 |
--------------------------------------------------------------------------------
/results.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: results.py
4 | Origin: Netflora (https://github.com/NetFlora/Netflora)
5 |
6 | """
7 |
8 | import argparse
9 | import re
10 | import os
11 | import glob
12 | import shutil
13 | import json
14 | import pandas as pd
15 | import numpy as np
16 | import geopandas as gpd
17 | import matplotlib.pyplot as plt
18 | from shapely.geometry import box
19 | from pathlib import Path
20 | from IPython.display import Image, display
21 |
22 |
23 | with open('processing/variable.json', 'r', encoding='utf-8') as file:
24 | variables = json.load(file)
25 | crs = variables['crs']
26 | algorithm = variables['algorithm']
27 |
28 | with open('json/groups.json', 'r', encoding='utf-8') as file:
29 | data = json.load(file)
30 | species_dict = data['species_dict']
31 | categories = data['categories']
32 |
33 | coords = pd.read_csv("processing/tile_coords.csv")
34 | base_path = "runs/detect/"
35 | output_shapefile_directory = "results/shapefiles/"
36 | output_csv_directory = "results/csv/"
37 | output_dir = "results/"
38 |
39 | def map_species_names(df, species_dict):
40 | df['common_name'] = df['class_id'].map(lambda x: species_dict[x]['common_name'] if x in species_dict else 'Desconhecido')
41 | return df
42 |
43 | def filter_by_algorithm(df, algorithm, categories):
44 | if algorithm in categories:
45 | valid_species_codes = categories[algorithm]
46 | return df[df['class_id'].isin(valid_species_codes)]
47 | else:
48 | print(f"Algoritmo {algorithm} não encontrado nas categorias. Usando dados completos.")
49 | return df
50 |
51 | def get_latest_exp_directory(base_path):
52 |
53 | base_directory = Path(base_path)
54 | exp_directories = []
55 |
56 |
57 | for d in base_directory.iterdir():
58 | if d.is_dir() and re.match(r"^exp(\d+)?$", d.name):
59 | exp_directories.append(d)
60 |
61 | if exp_directories:
62 |
63 | exp_directories.sort(key=lambda x: int(re.findall(r'\d+', x.name)[0]) if re.findall(r'\d+', x.name) else 0)
64 | return exp_directories[-1]
65 | else:
66 | return None
67 |
68 | def check_and_create_dir(directory):
69 | if not os.path.exists(directory):
70 | os.makedirs(directory)
71 |
72 | def calculate_iou(bb1, bb2):
73 | intersection = bb1.intersection(bb2).area
74 | union = bb1.union(bb2).area
75 | iou = intersection / union
76 | return iou
77 |
78 | def calculate_score(bb, alpha):
79 | area_score = bb.area
80 | shape_score = bb.length
81 | score = (1 - alpha) * area_score + alpha * shape_score
82 | return score
83 |
84 | def apply_nms_with_score(gdf, iou_threshold, alpha):
85 | to_remove = []
86 | for i in range(len(gdf)):
87 | for j in range(i + 1, len(gdf)):
88 | bb1 = gdf['geometry'].iloc[i]
89 | bb2 = gdf['geometry'].iloc[j]
90 |
91 | iou = calculate_iou(bb1, bb2)
92 | if iou > iou_threshold:
93 | score_bb1 = calculate_score(bb1, alpha)
94 | score_bb2 = calculate_score(bb2, alpha)
95 |
96 | if score_bb1 > score_bb2:
97 | to_remove.append(j)
98 | else:
99 | to_remove.append(i)
100 |
101 | gdf_nms = gdf.drop(to_remove)
102 | return gdf_nms
103 |
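# Worked example of the score used above (annotation, not part of the original
# file): with alpha = 0.20, a 3 m x 3 m box (area 9, perimeter 12) scores
# 0.8 * 9 + 0.2 * 12 = 9.6, while a 1 m x 9 m box of equal area (perimeter 20)
# scores 0.8 * 9 + 0.2 * 20 = 11.2; when two boxes overlap beyond iou_threshold
# the higher-scoring (here, the more elongated) box is kept. Raising alpha
# therefore shifts the tie-break from area towards perimeter.
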
104 | def filter_rectangular_bounding_boxes(gdf, min_aspect_ratio, max_aspect_ratio):
105 | rectangular_indices = []
106 |
107 | for i, row in gdf.iterrows():
108 | x1, y1, x2, y2 = row['geometry'].bounds
109 | width = x2 - x1
110 | height = y2 - y1
111 | aspect_ratio = height / width
112 |
113 | if min_aspect_ratio <= aspect_ratio <= max_aspect_ratio:
114 | rectangular_indices.append(i)
115 |
116 | gdf_filtered = gdf.loc[rectangular_indices].copy()
117 | return gdf_filtered
118 |
119 | def calculate_width_height(df):
120 | df['width_m'] = (df['bb_xmax'] - df['bb_xmin'])
121 | df['height_m'] = (df['bb_ymax'] - df['bb_ymin'])
122 | return df
123 |
124 | def plot_class_distribution(gdf_nms_final, output_dir, algorithm):
125 | colors = plt.cm.tab20(np.linspace(0, 1, len(gdf_nms_final['name'].unique())))
126 | class_counts = gdf_nms_final['name'].value_counts().sort_index()
127 | fig, ax = plt.subplots(figsize=(12, 8))
128 | bars = ax.bar(class_counts.index, class_counts.values, color=colors, edgecolor='grey')
129 |
130 | for bar, color in zip(bars, colors):
131 | height = bar.get_height()
132 | ax.annotate(f'{int(height)}',
133 | xy=(bar.get_x() + bar.get_width() / 2, height),
134 | xytext=(0, 3),
135 | textcoords="offset points",
136 | ha='center', va='bottom',
137 | color='black')
138 |
139 | plt.title(f'Distribuição de Frequência - {algorithm}', fontsize=18, fontweight='bold', color='black')
140 | plt.xlabel('Espécie', fontsize=14, fontweight='bold', color='black')
141 | plt.ylabel('Contagem', fontsize=14, fontweight='bold', color='black')
142 | plt.xticks(rotation=45, ha='right', fontsize=12, fontweight='normal', color='black')
143 | plt.yticks(fontsize=12, fontweight='bold', color='black')
144 | ax.set_facecolor('white')
145 | fig.set_facecolor('white')
146 | for spine in ax.spines.values():
147 | spine.set_visible(False)
148 | ax.yaxis.grid(True, linestyle='--', which='major', color='grey', alpha=0.3)
149 | ax.xaxis.set_tick_params(size=0)
150 | ax.yaxis.set_tick_params(size=0)
151 | plt.tight_layout()
152 | plt.savefig(f'{output_dir}frequencia_de_{algorithm}.png', dpi=300)
153 | plt.show()
154 |
155 | def showResults():
156 | output_dir = "results/"
157 | chart_name = f'frequencia_de_{algorithm}.png'
158 | if os.path.exists(f'{output_dir}{chart_name}'):
159 | display(Image(f'{output_dir}{chart_name}'))
160 | else:
161 | print(f"File not found: {output_dir}{chart_name}")
162 |
163 | def downloadResults():
164 | output_dir = "results/"
165 | zip_name = f"resultados_{algorithm}.zip"
166 | zip_path = f"{os.getcwd()}/{zip_name}"
167 |
168 | if os.path.exists(zip_path):
169 | os.remove(zip_path)
170 |
171 | shutil.make_archive(f"resultados_{algorithm}", 'zip', output_dir)
172 | print(f"Pasta '{output_dir}' zipada como '{zip_path}'.")
173 |
174 | try:
175 | from google.colab import files
176 | files.download(zip_path)
177 | print(f"Initiating file download: {zip_path}")
178 | except ImportError:
179 | print(f"Download not available. File saved at {zip_path}")
180 |
181 | def main():
182 | parser = argparse.ArgumentParser(description="Process and visualize detection data.")
183 | parser.add_argument('--graphics', action='store_true', help="Generate and display class distribution graphics.")
184 | parser.add_argument('--download', action='store_true', help="Zip and download the results directory.")
185 | parser.add_argument('--conf', type=float, default=0.25, help="Confidence threshold for filtering detections.")
186 | args = parser.parse_args()
187 |
188 |
189 | latest_exp_directory = get_latest_exp_directory(base_path)
190 |
191 | if latest_exp_directory:
192 | base_directory = str(latest_exp_directory) + "/"
193 | else:
194 | print("Nenhuma detecão encontrada. Execute primeiro o processo de detecção.")
195 | base_directory = None
196 |
197 | time_data = {}
198 |
199 | check_and_create_dir(output_shapefile_directory)
200 | check_and_create_dir(output_csv_directory)
201 |
202 | pasta_labels = os.path.join(base_directory, "labels")
203 | arquivos_txt = glob.glob(os.path.join(pasta_labels, "*.txt"))
204 |
205 | data = []
206 |
207 | for arquivo_txt in arquivos_txt:
208 | filename = os.path.basename(arquivo_txt)[:-4] + ".jpg"
209 | if filename not in coords['filename'].values:
210 | print(f"Coordinates not found for {filename} in the CSV. Skipping.")
211 | continue
212 |
213 | coords_row = coords[coords['filename'] == filename].iloc[0]
214 | utm_xmin, utm_ymin, utm_xmax, utm_ymax = coords_row[['minX', 'minY', 'maxX', 'maxY']]
215 | utm_width = utm_xmax - utm_xmin
216 | utm_height = utm_ymax - utm_ymin
217 |
218 | with open(arquivo_txt, "r") as txt_file:
219 | for line in txt_file:
220 | parts = line.split()
221 | if len(parts) != 6:
222 | continue
223 |
224 | class_id, cse_x, cse_y, width, height, confidence = map(float, parts)
225 | bb_xcenter = utm_xmin + cse_x * utm_width
226 | bb_ycenter = utm_ymax - cse_y * utm_height
227 | bb_xmin = bb_xcenter - (width * utm_width / 2)
228 | bb_ymin = bb_ycenter - (height * utm_height / 2)
229 | bb_xmax = bb_xmin + width * utm_width
230 | bb_ymax = bb_ymin + height * utm_height
231 |
232 | data.append([filename, class_id, cse_x, cse_y, width, height,confidence, bb_xmin, bb_ymin, bb_xmax, bb_ymax])
233 |
234 | confidence_threshold = args.conf
235 |
236 | if data:
237 | df = pd.DataFrame(data, columns=['filename', 'class_id', 'cse_x', 'cse_y', 'width', 'height','confidence', 'bb_xmin', 'bb_ymin', 'bb_xmax', 'bb_ymax'])
238 | df['num_tiles'] = len(arquivos_txt)
239 | df = df[df['confidence'] >= confidence_threshold]
240 |
241 | df = calculate_width_height(df)
242 |
243 | geometry = [box(x1, y1, x2, y2) for x1, y1, x2, y2 in zip(df['bb_xmin'], df['bb_ymin'], df['bb_xmax'], df['bb_ymax'])]
244 | gdf = gpd.GeoDataFrame(df, geometry=geometry, crs=crs)
245 |
246 |
247 | else:
248 | print("No data found in the .txt files. No file will be created.")
249 |
250 | iou_threshold = 0.20
251 | alpha = 0.20
252 |
253 | min_aspect_ratio = 0.5
254 | max_aspect_ratio = 2
255 |
256 | gdf_filtered = filter_rectangular_bounding_boxes(gdf, min_aspect_ratio, max_aspect_ratio)
257 | gdf_filtered = gdf_filtered.reset_index(drop=True)
258 | gdf_nms = apply_nms_with_score(gdf_filtered, iou_threshold, alpha)
259 |
260 |
261 | species_category = categories.get(algorithm, [])
262 | species_codes = [item["specie"] for item in species_category]
263 |
264 |
265 | gdf_nms['name'] = gdf_nms['class_id'].apply(
266 | lambda x: species_dict.get(species_codes[int(x)], {'common_name': 'Desconhecido'})['common_name']
267 | if int(x) < len(species_codes) else 'Desconhecido'
268 | )
269 |
270 | gdf_nms['sci_name'] = gdf_nms['class_id'].apply(
271 | lambda x: species_dict.get(species_codes[int(x)], {'scientific_name': 'Desconhecido'})['scientific_name']
272 | if int(x) < len(species_codes) and species_dict.get(species_codes[int(x)]) else 'Desconhecido'
273 | )
274 |
275 |
276 | gdf_nms_final = gdf_nms[['filename', 'class_id', 'name', 'sci_name', 'confidence','width_m', 'height_m', 'geometry']].copy()
277 |
278 |
279 | gdf_nms_final.to_file(f'{output_shapefile_directory}/resultados_{algorithm}.shp')
280 | csv_path = os.path.join(output_csv_directory, f'resultados_{algorithm}.csv')
281 | gdf_nms_final.to_csv(csv_path, index=False)
282 |
283 |
284 | if args.graphics:
285 |
286 | print("Gerando relatório...")
287 | print(f'Os resultados da detecção de {algorithm} foram salvos na pasta results')
288 | plot_class_distribution(gdf_nms_final, output_dir, algorithm)
289 | showResults()
290 |
291 |
292 |
293 |
294 | if args.download:
295 |
296 | print("Zipping and downloading...")
297 | downloadResults()
298 |
299 | if __name__ == "__main__":
300 | main()
301 |
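# Worked example of the label-to-UTM conversion in main() (annotation, not part
# of the original file): for a tile spanning utm_xmin=500000, utm_xmax=500100,
# utm_ymin=8900000, utm_ymax=8900100 (so utm_width = utm_height = 100 m), a
# YOLO label with cse_x = cse_y = 0.5 and width = height = 0.2 maps to
# bb_xcenter = 500000 + 0.5*100 = 500050 and
# bb_ycenter = 8900100 - 0.5*100 = 8900050 (y is flipped because image rows
# grow downward while UTM northing grows upward), giving a 20 m x 20 m box
# from (500040, 8900040) to (500060, 8900060).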
--------------------------------------------------------------------------------
/tiles.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: tiles.py
4 | Origin: Netflora (https://github.com/NetFlora/Netflora)
5 |
6 | """
7 |
8 | import ipywidgets as widgets
9 | from IPython.display import display, clear_output, HTML
10 | import rasterio
11 | from rasterio.windows import Window
12 | from utils.credentials import credentials
13 | from PIL import Image
14 | import numpy as np
15 | import pandas as pd
16 | import shutil
17 | from tqdm.notebook import tqdm
18 | import requests
19 | import os
20 | import csv
21 | import json
22 |
23 |
24 |
25 | class TileGenerator:
26 | def __init__(self):
27 | self.verified = False
28 | self.attempted_verification = False
29 | self.crs = None
30 | self.specs = {
31 | 'Açaí': {'name': 'Acai', 'size': 1536, 'overlap': 128, 'link': 'https://github.com/NetFlora/Netflora/releases/download/Assets/ACAI_Embrapa00.pt'},
32 | 'Palmeiras': {'name': 'Palmeiras', 'size': 1536, 'overlap': 256,'link': 'https://github.com/NetFlora/Netflora/releases/download/Assets/PALMEIRAS_Embrapa00.pt'},
33 | 'Castanheira': {'name': 'Castanheira', 'size': 2048, 'overlap': 1024, 'link': None},
34 | 'PMFS': {'name': 'PMFS', 'size': 1536, 'overlap': 768, 'link': 'https://github.com/NetFlora/Netflora/releases/download/Assets/PMFS_Embrapa00.pt'},
35 | 'PFNMs': {'name': 'PFNMs', 'size': 1536, 'overlap': 512, 'link': 'https://github.com/NetFlora/Netflora/releases/download/Assets/NM_Embrapa00.pt'},
36 | 'Ecológico': {'name': 'Ecologico', 'size': 3000, 'overlap': 0, 'link': None},
37 | }
38 | self.setup_ui()
39 | self.verify()
40 |
41 | def verify(self):
42 | if self.attempted_verification:
43 | return self.verified
44 | self.attempted_verification = True
45 |
46 | try:
47 | with open('json/response_status.json', 'r', encoding='utf-8') as file:
48 | variables = json.load(file)
49 | if variables['status_code'] == 200:
50 | self.verified = True
51 |
52 | except FileNotFoundError:
53 | pass
54 | except json.JSONDecodeError as e:
55 | pass
56 | if not self.verified:
57 | display(HTML('Por gentileza, aceite o Termo de Uso!'))
58 |             display(HTML('Por favor, preencha os dados e rode essa célula novamente!'))
59 | if credentials():
60 | self.verified = True
61 |
62 |
63 | if self.verified:
64 | self.enable_ui()
65 |
66 | def enable_ui(self):
67 | """Enable UI elements after successful verification."""
68 | self.image_path_text.disabled = False
69 | self.dropdown.disabled = False
70 | self.button.disabled = False
71 |
72 |
73 | def download_model_weights(self, url, output_path):
74 | if url is not None:
75 | response = requests.get(url)
76 | with open(output_path, 'wb') as f:
77 | f.write(response.content)
78 |
79 |
80 | def create_tiles_with_overlap_and_save_coords(self, image_path, tile_size, overlap, output_dir, csv_path):
81 | if not os.path.exists(output_dir):
82 | os.makedirs(output_dir)
83 |
84 | tile_counter = 0
85 |
86 | with rasterio.open(image_path) as src:
87 | self.crs = src.crs
88 | with open(csv_path, mode='w', newline='') as file:
89 | writer = csv.writer(file)
90 | writer.writerow(['filename', 'minX', 'minY', 'maxX', 'maxY', 'crs'])
91 |
92 | width, height = src.width, src.height
93 | total_tiles = ((height - overlap) // (tile_size - overlap)) * ((width - overlap) // (tile_size - overlap))
94 |
95 | pbar_creation = tqdm(total=total_tiles, desc="Creating Tiles")
96 |
97 | for i in range(0, height, tile_size - overlap):
98 | for j in range(0, width, tile_size - overlap):
99 | w = min(tile_size, width - j)
100 | h = min(tile_size, height - i)
101 | window = Window(j, i, w, h)
102 | transform = src.window_transform(window)
103 | tile = src.read(window=window)
104 |
105 | if np.any(tile):
106 | if tile.shape[0] == 4:
107 | tile_image = Image.fromarray(np.moveaxis(tile, 0, -1)).convert('RGB')
108 | else:
109 | tile_image = Image.fromarray(np.moveaxis(tile, 0, -1))
110 |
111 | tile_filename = f'tile_{tile_counter}.jpg'
112 | tile_image.save(os.path.join(output_dir, tile_filename), 'JPEG')
113 |
114 | bounds = rasterio.transform.array_bounds(h, w, transform)
115 | writer.writerow([tile_filename, bounds[0], bounds[1], bounds[2], bounds[3], str(self.crs)])
116 |
117 | tile_counter += 1
118 |
119 | pbar_creation.update(1)
120 |
121 | pbar_creation.close()
122 |
123 | pbar_verification = tqdm(total=tile_counter, desc="Processing Tiles")
124 | for _ in os.listdir(output_dir):
125 | pbar_verification.update(1)
126 | pbar_verification.close()
127 |
128 | return tile_counter
129 |
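    # Worked example of the tiling stride (annotation, not part of the original
    # file): windows advance by tile_size - overlap pixels, so for the 'Açaí'
    # spec (size 1536, overlap 128) the stride is 1408 px; a 14080 px wide
    # orthophoto therefore yields 10 tile columns, adjacent tiles sharing a
    # 128 px strip so crowns cut by one tile border appear whole in the next.
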
130 | def get_tif_center(self, image_path):
131 | with rasterio.open(image_path) as tif:
132 | center_x = (tif.bounds.left + tif.bounds.right) / 2
133 | center_y = (tif.bounds.top + tif.bounds.bottom) / 2
134 | return center_x, center_y
135 |
136 |
137 | def find_closest_images(self, csv_path, center, max_distance=100, max_images=5, images_folder='output_tiles', output_folder='processing/selected_images'):
138 |
139 | df = pd.read_csv(csv_path)
140 |
141 |
142 | df['center_x'] = (df['minX'] + df['maxX']) / 2
143 | df['center_y'] = (df['minY'] + df['maxY']) / 2
144 |
145 | distances = np.sqrt((df['center_x'] - center[0]) ** 2 + (df['center_y'] - center[1]) ** 2)
146 | df['distance'] = distances
147 |
148 | closest_images = df[df['distance'] <= max_distance].nsmallest(max_images, 'distance')
149 |
150 | if not os.path.exists(output_folder):
151 | os.makedirs(output_folder)
152 |
153 | for _, row in closest_images.iterrows():
154 | image_path = os.path.join(images_folder, row['filename'])
155 | output_image_path = os.path.join(output_folder, row['filename'])
156 | if os.path.exists(image_path):
157 | shutil.copy(image_path, output_image_path)
158 | else:
159 | print(f"A imagem {row['filename']} não foi encontrada em {images_folder}.")
160 |
161 | print(f"{len(closest_images)} imagens foram copiadas para {output_folder}.")
162 |
163 | def setup_ui(self):
164 | self.image_path_text = widgets.Text(
165 | value='',
166 | placeholder='Insira o caminho da ortofoto aqui',
167 | description='Ortofoto:',
168 | disabled=True
169 | )
170 |
171 | self.dropdown = widgets.Dropdown(
172 | options=['Selecione'] + list(self.specs.keys()),
173 | value='Selecione',
174 | description='Algoritmo:',
175 | disabled=True
176 | )
177 |
178 | self.button = widgets.Button(description="Gerar Tiles", disabled=True)
179 | self.output = widgets.Output()
180 |
181 | self.button.on_click(self.on_button_clicked)
182 |
183 | self.dropdown.observe(self.on_algorithm_change, names='value')
184 |
185 | display(self.image_path_text, self.dropdown, self.button, self.output)
186 |
187 | def on_algorithm_change(self, change):
188 |
189 | with self.output:
190 | clear_output(wait=True)
191 | if change['new'] == 'Selecione':
192 |
193 | display(HTML('Por favor, selecione um algoritmo.'))
194 | else:
195 |
196 | display(HTML(f'O algoritmo {change["new"]} foi selecionado.'))
197 |
198 | def on_button_clicked(self, b):
199 | with self.output:
200 | clear_output()
201 | if self.dropdown.value == 'Selecione':
202 | display(HTML('Por favor, selecione um algoritmo.'))
203 | else:
204 | selected_spec = self.specs[self.dropdown.value]
205 |
206 |
207 | if selected_spec['link'] is None:
208 |
209 | display(HTML(f'O algoritmo {selected_spec["name"]} está em fase de desenvolvimento. Estamos trabalhando para disponibilizá-lo em breve. Fique atento(a) às próximas atualizações!'))
210 | else:
211 |
212 | image_path = self.image_path_text.value
213 | output_dir = 'processing/output_tiles'
214 | csv_path = 'processing/tile_coords.csv'
215 |
216 | model_weights_path = 'model_weights.pt'
217 | display(HTML('Carregando algoritmo...'))
218 | self.download_model_weights(selected_spec['link'], model_weights_path)
219 | display(HTML('O algoritmo foi carregado com sucesso.'))
220 |
221 | num_tiles = self.create_tiles_with_overlap_and_save_coords(image_path, selected_spec['size'], selected_spec['overlap'], output_dir, csv_path)
222 | display(HTML(f'Número total de tiles criados: {num_tiles}'))
223 |
224 | center = self.get_tif_center(image_path)
225 | self.find_closest_images(csv_path, center, max_distance=100, max_images=5, images_folder=output_dir, output_folder='processing/selected_images')
226 |
227 | variables = {
228 | 'crs': str(self.crs),
229 | 'algorithm': selected_spec['name'],
230 | 'tile_size': selected_spec['size'],
231 | 'overlap': selected_spec['overlap']
232 | }
233 |
234 | with open('processing/variable.json', 'w') as f:
235 |
236 | json.dump(variables, f, indent=4)
237 |
238 |
--------------------------------------------------------------------------------
/utils/__init__.py:
--------------------------------------------------------------------------------
1 | # init
--------------------------------------------------------------------------------
/utils/activations.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: activations.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 |
9 |
10 | # Activation functions
11 |
12 | import torch
13 | import torch.nn as nn
14 | import torch.nn.functional as F
15 |
16 |
17 | # SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------
18 | class SiLU(nn.Module): # export-friendly version of nn.SiLU()
19 | @staticmethod
20 | def forward(x):
21 | return x * torch.sigmoid(x)
22 |
23 |
24 | class Hardswish(nn.Module): # export-friendly version of nn.Hardswish()
25 | @staticmethod
26 | def forward(x):
27 | # return x * F.hardsigmoid(x) # for torchscript and CoreML
28 | return x * F.hardtanh(x + 3, 0., 6.) / 6. # for torchscript, CoreML and ONNX
29 |
30 |
31 | class MemoryEfficientSwish(nn.Module):
32 | class F(torch.autograd.Function):
33 | @staticmethod
34 | def forward(ctx, x):
35 | ctx.save_for_backward(x)
36 | return x * torch.sigmoid(x)
37 |
38 | @staticmethod
39 | def backward(ctx, grad_output):
40 | x = ctx.saved_tensors[0]
41 | sx = torch.sigmoid(x)
42 | return grad_output * (sx * (1 + x * (1 - sx)))
43 |
44 | def forward(self, x):
45 | return self.F.apply(x)
46 |
47 |
48 | # Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
49 | class Mish(nn.Module):
50 | @staticmethod
51 | def forward(x):
52 | return x * F.softplus(x).tanh()
53 |
54 |
55 | class MemoryEfficientMish(nn.Module):
56 | class F(torch.autograd.Function):
57 | @staticmethod
58 | def forward(ctx, x):
59 | ctx.save_for_backward(x)
60 | return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x)))
61 |
62 | @staticmethod
63 | def backward(ctx, grad_output):
64 | x = ctx.saved_tensors[0]
65 | sx = torch.sigmoid(x)
66 | fx = F.softplus(x).tanh()
67 | return grad_output * (fx + x * sx * (1 - fx * fx))
68 |
69 | def forward(self, x):
70 | return self.F.apply(x)
71 |
72 |
73 | # FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
74 | class FReLU(nn.Module):
75 | def __init__(self, c1, k=3): # ch_in, kernel
76 | super().__init__()
77 | self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
78 | self.bn = nn.BatchNorm2d(c1)
79 |
80 | def forward(self, x):
81 | return torch.max(x, self.bn(self.conv(x)))
82 |
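# --- Illustrative self-check (annotation, not part of the original file) ---
# The memory-efficient variants recompute activations in backward() instead of
# storing them; a minimal sketch verifying that the hand-written backward of
# MemoryEfficientMish matches autograd through the naive Mish:
if __name__ == '__main__':
    x1 = torch.randn(8, dtype=torch.double, requires_grad=True)
    x2 = x1.detach().clone().requires_grad_(True)
    Mish()(x1).sum().backward()                 # autograd through naive ops
    MemoryEfficientMish()(x2).sum().backward()  # custom backward formula
    assert torch.allclose(x1.grad, x2.grad, atol=1e-8)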
--------------------------------------------------------------------------------
/utils/add_nms.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: add_nms.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 | import numpy as np
9 | import onnx
10 | from onnx import shape_inference
11 | try:
12 | import onnx_graphsurgeon as gs
13 | except Exception as e:
14 | print('Import onnx_graphsurgeon failure: %s' % e)
15 |
16 | import logging
17 |
18 | LOGGER = logging.getLogger(__name__)
19 |
20 | class RegisterNMS(object):
21 | def __init__(
22 | self,
23 | onnx_model_path: str,
24 | precision: str = "fp32",
25 | ):
26 |
27 | self.graph = gs.import_onnx(onnx.load(onnx_model_path))
28 | assert self.graph
29 | LOGGER.info("ONNX graph created successfully")
30 | # Fold constants via ONNX-GS that PyTorch2ONNX may have missed
31 | self.graph.fold_constants()
32 | self.precision = precision
33 | self.batch_size = 1
34 | def infer(self):
35 | """
36 | Sanitize the graph by cleaning any unconnected nodes, do a topological resort,
37 | and fold constant inputs values. When possible, run shape inference on the
38 | ONNX graph to determine tensor shapes.
39 | """
40 | for _ in range(3):
41 | count_before = len(self.graph.nodes)
42 |
43 | self.graph.cleanup().toposort()
44 | try:
45 | for node in self.graph.nodes:
46 | for o in node.outputs:
47 | o.shape = None
48 | model = gs.export_onnx(self.graph)
49 | model = shape_inference.infer_shapes(model)
50 | self.graph = gs.import_onnx(model)
51 | except Exception as e:
52 | LOGGER.info(f"Shape inference could not be performed at this time:\n{e}")
53 | try:
54 | self.graph.fold_constants(fold_shapes=True)
55 | except TypeError as e:
56 | LOGGER.error(
57 | "This version of ONNX GraphSurgeon does not support folding shapes, "
58 | f"please upgrade your onnx_graphsurgeon module. Error:\n{e}"
59 | )
60 | raise
61 |
62 | count_after = len(self.graph.nodes)
63 | if count_before == count_after:
64 | # No new folding occurred in this iteration, so we can stop for now.
65 | break
66 |
67 | def save(self, output_path):
68 | """
69 | Save the ONNX model to the given location.
70 | Args:
71 | output_path: Path pointing to the location where to write
72 | out the updated ONNX model.
73 | """
74 | self.graph.cleanup().toposort()
75 | model = gs.export_onnx(self.graph)
76 | onnx.save(model, output_path)
77 | LOGGER.info(f"Saved ONNX model to {output_path}")
78 |
79 | def register_nms(
80 | self,
81 | *,
82 | score_thresh: float = 0.25,
83 | nms_thresh: float = 0.45,
84 | detections_per_img: int = 100,
85 | ):
86 | """
87 | Register the ``EfficientNMS_TRT`` plugin node.
88 | NMS expects these shapes for its input tensors:
89 | - box_net: [batch_size, number_boxes, 4]
90 | - class_net: [batch_size, number_boxes, number_labels]
91 | Args:
92 | score_thresh (float): The scalar threshold for score (low scoring boxes are removed).
93 | nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU
94 | overlap with previously selected boxes are removed).
95 | detections_per_img (int): Number of best detections to keep after NMS.
96 | """
97 |
98 | self.infer()
99 | # Find the concat node at the end of the network
100 | op_inputs = self.graph.outputs
101 | op = "EfficientNMS_TRT"
102 | attrs = {
103 | "plugin_version": "1",
104 | "background_class": -1, # no background class
105 | "max_output_boxes": detections_per_img,
106 | "score_threshold": score_thresh,
107 | "iou_threshold": nms_thresh,
108 | "score_activation": False,
109 | "box_coding": 0,
110 | }
111 |
112 | if self.precision == "fp32":
113 | dtype_output = np.float32
114 | elif self.precision == "fp16":
115 | dtype_output = np.float16
116 | else:
117 |             raise NotImplementedError(f"Precision {self.precision} is not currently supported")
118 |
119 | # NMS Outputs
120 | output_num_detections = gs.Variable(
121 | name="num_dets",
122 | dtype=np.int32,
123 | shape=[self.batch_size, 1],
124 | ) # A scalar indicating the number of valid detections per batch image.
125 | output_boxes = gs.Variable(
126 | name="det_boxes",
127 | dtype=dtype_output,
128 | shape=[self.batch_size, detections_per_img, 4],
129 | )
130 | output_scores = gs.Variable(
131 | name="det_scores",
132 | dtype=dtype_output,
133 | shape=[self.batch_size, detections_per_img],
134 | )
135 | output_labels = gs.Variable(
136 | name="det_classes",
137 | dtype=np.int32,
138 | shape=[self.batch_size, detections_per_img],
139 | )
140 |
141 | op_outputs = [output_num_detections, output_boxes, output_scores, output_labels]
142 |
143 | # Create the NMS Plugin node with the selected inputs. The outputs of the node will also
144 | # become the final outputs of the graph.
145 | self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs)
146 | LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}")
147 |
148 | self.graph.outputs = op_outputs
149 |
150 | self.infer()
151 |
152 | def save(self, output_path):
153 | """
154 | Save the ONNX model to the given location.
155 | Args:
156 | output_path: Path pointing to the location where to write
157 | out the updated ONNX model.
158 | """
159 | self.graph.cleanup().toposort()
160 | model = gs.export_onnx(self.graph)
161 | onnx.save(model, output_path)
162 | LOGGER.info(f"Saved ONNX model to {output_path}")
163 |
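# --- Illustrative usage sketch (annotation, not part of the original file) ---
# Given an already exported ONNX model ('model.onnx' is a hypothetical path),
# the EfficientNMS_TRT plugin can be appended like this:
#
#   nms = RegisterNMS('model.onnx', precision='fp32')
#   nms.register_nms(score_thresh=0.25, nms_thresh=0.45, detections_per_img=100)
#   nms.save('model_nms.onnx')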
--------------------------------------------------------------------------------
/utils/autoanchor.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: autoanchor.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 | # Auto-anchor utils
9 |
10 | import numpy as np
11 | import torch
12 | import yaml
13 | from scipy.cluster.vq import kmeans
14 | from tqdm import tqdm
15 |
16 | from utils.general import colorstr
17 |
18 |
19 | def check_anchor_order(m):
20 | # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary
21 | a = m.anchor_grid.prod(-1).view(-1) # anchor area
22 | da = a[-1] - a[0] # delta a
23 | ds = m.stride[-1] - m.stride[0] # delta s
24 |     if da.sign() != ds.sign():  # anchor order is reversed relative to strides
25 | print('Reversing anchor order')
26 | m.anchors[:] = m.anchors.flip(0)
27 | m.anchor_grid[:] = m.anchor_grid.flip(0)
28 |
29 |
30 | def check_anchors(dataset, model, thr=4.0, imgsz=640):
31 | # Check anchor fit to data, recompute if necessary
32 | prefix = colorstr('autoanchor: ')
33 | print(f'\n{prefix}Analyzing anchors... ', end='')
34 | m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
35 | shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
36 | scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
37 | wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
38 |
39 | def metric(k): # compute metric
40 | r = wh[:, None] / k[None]
41 | x = torch.min(r, 1. / r).min(2)[0] # ratio metric
42 | best = x.max(1)[0] # best_x
43 | aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold
44 | bpr = (best > 1. / thr).float().mean() # best possible recall
45 | return bpr, aat
46 |
47 | anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors
48 | bpr, aat = metric(anchors)
49 | print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='')
50 | if bpr < 0.98: # threshold to recompute
51 | print('. Attempting to improve anchors, please wait...')
52 | na = m.anchor_grid.numel() // 2 # number of anchors
53 | try:
54 | anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
55 | except Exception as e:
56 | print(f'{prefix}ERROR: {e}')
57 | new_bpr = metric(anchors)[0]
58 | if new_bpr > bpr: # replace anchors
59 | anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
60 | m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference
61 | check_anchor_order(m)
62 | m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss
63 | print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
64 | else:
65 | print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.')
66 | print('') # newline
67 |
68 |
69 | def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
70 | """ Creates kmeans-evolved anchors from training dataset
71 |
72 | Arguments:
73 | path: path to dataset *.yaml, or a loaded dataset
74 | n: number of anchors
75 | img_size: image size used for training
76 | thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
77 | gen: generations to evolve anchors using genetic algorithm
78 | verbose: print all results
79 |
80 | Return:
81 | k: kmeans evolved anchors
82 |
83 | Usage:
84 | from utils.autoanchor import *; _ = kmean_anchors()
85 | """
86 | thr = 1. / thr
87 | prefix = colorstr('autoanchor: ')
88 |
89 | def metric(k, wh): # compute metrics
90 | r = wh[:, None] / k[None]
91 | x = torch.min(r, 1. / r).min(2)[0] # ratio metric
92 | # x = wh_iou(wh, torch.tensor(k)) # iou metric
93 | return x, x.max(1)[0] # x, best_x
94 |
95 | def anchor_fitness(k): # mutation fitness
96 | _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
97 | return (best * (best > thr).float()).mean() # fitness
98 |
99 | def print_results(k):
100 | k = k[np.argsort(k.prod(1))] # sort small to large
101 | x, best = metric(k, wh0)
102 | bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
103 | print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
104 | print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
105 | f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
106 | for i, x in enumerate(k):
107 | print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg
108 | return k
109 |
110 | if isinstance(path, str): # *.yaml file
111 | with open(path) as f:
112 | data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict
113 | from utils.datasets import LoadImagesAndLabels
114 | dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
115 | else:
116 | dataset = path # dataset
117 |
118 | # Get label wh
119 | shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
120 | wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
121 |
122 | # Filter
123 | i = (wh0 < 3.0).any(1).sum()
124 | if i:
125 | print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
126 | wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels
127 | # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
128 |
129 | # Kmeans calculation
130 | print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...')
131 | s = wh.std(0) # sigmas for whitening
132 | k, dist = kmeans(wh / s, n, iter=30) # points, mean distance
133 | assert len(k) == n, print(f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}')
134 | k *= s
135 | wh = torch.tensor(wh, dtype=torch.float32) # filtered
136 | wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered
137 | k = print_results(k)
138 |
139 | # Plot
140 | # k, d = [None] * 20, [None] * 20
141 | # for i in tqdm(range(1, 21)):
142 | # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
143 | # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
144 | # ax = ax.ravel()
145 | # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
146 | # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
147 | # ax[0].hist(wh[wh[:, 0]<100, 0],400)
148 | # ax[1].hist(wh[wh[:, 1]<100, 1],400)
149 | # fig.savefig('wh.png', dpi=200)
150 |
151 | # Evolve
152 | npr = np.random
153 |     f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, anchor shape, mutation prob, sigma
154 | pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar
155 | for _ in pbar:
156 | v = np.ones(sh)
157 | while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
158 | v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
159 | kg = (k.copy() * v).clip(min=2.0)
160 | fg = anchor_fitness(kg)
161 | if fg > f:
162 | f, k = fg, kg.copy()
163 | pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
164 | if verbose:
165 | print_results(k)
166 |
167 | return print_results(k)
168 |
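# Worked example of the ratio metric (annotation, not part of the original
# file): for a label of wh = (50, 80) and an anchor k = (40, 100), the
# per-dimension ratios are r = (1.25, 0.8) and min(r, 1/r) = (0.8, 0.8), so
# the metric is 0.8; with the default thr = 4.0 this label counts towards the
# best possible recall because 0.8 > 1/4.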
--------------------------------------------------------------------------------
/utils/batch_detection.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: batch_detection.py
4 | Origin: Netflora (https://github.com/NetFlora/Netflora)
5 |
6 | """
7 |
8 |
9 |
10 | # batch_detection.py
11 | import subprocess
12 | from tqdm import tqdm
13 |
14 | def runBatchDetection(thresholds=[0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50],
15 | detect_script_path='detect.py',
16 | weights_path='model_weights.pt',
17 | img_size=640,
18 | source_path='processing/selected_images',
19 | device=0,
20 | save_txt=False):
21 |
22 | for conf in tqdm(thresholds, desc="Processing thresholds"):
23 | result_name = f'{conf:.2f}'
24 |         command = f'python {detect_script_path} --device {device} --weights {weights_path} --img {img_size} --conf {conf} --source {source_path} --name {result_name}{" --save-txt" if save_txt else ""}'
25 | subprocess.run(command, shell=True)
26 |
27 | print("Amostras para vizualização de theshold criadas com sucesso.")
28 |
29 | return
30 |
31 |
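# --- Illustrative usage sketch (annotation, not part of the original file) ---
# Sweeping a few confidence thresholds over the sample tiles selected by
# tiles.py, e.g. from a notebook cell:
#
#   from utils.batch_detection import runBatchDetection
#   runBatchDetection(thresholds=[0.10, 0.25, 0.40], img_size=1536)
#
# Each run is saved under runs/detect/<threshold>/ via detect.py's --name flag.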
--------------------------------------------------------------------------------
/utils/credentials.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: credentials.py
4 | Origin: Netflora (https://github.com/NetFlora/Netflora)
5 |
6 | """
7 |
8 |
9 | from ipywidgets import Button, Text, Dropdown, Output, VBox, HTML, Checkbox
10 | from IPython.display import display, clear_output
11 | from google.colab import drive
12 | import requests
13 | import re
14 | import json
15 |
16 | def format_cep(cep):
17 | if len(cep) == 8 and "-" not in cep:
18 | return f"{cep[:5]}-{cep[5:]}"
19 | return cep
20 |
21 | def fetch_cep_data(cep):
22 | cep = cep.replace("-", "")
23 | if len(cep) == 8:
24 | response = requests.get(f"https://viacep.com.br/ws/{cep}/json/")
25 | if response.status_code == 200:
26 | cep_data = response.json()
27 | if "erro" not in cep_data:
28 | logradouro = cep_data.get('logradouro', '')
29 | bairro = cep_data.get('bairro', '')
30 | cidade = cep_data.get('localidade', '')
31 | estado = cep_data.get('uf', '')
32 | pais = 'Brasil'
33 | return True, logradouro, bairro, cidade, estado, pais
34 | return False, "", "", "", "", ""
35 |
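# Example (annotation, not part of the original file): format_cep('69900000')
# returns '69900-000', and fetch_cep_data('69900-000') queries the public
# ViaCEP service, returning (True, logradouro, bairro, cidade, estado,
# 'Brasil') when the CEP exists and (False, '', '', '', '', '') otherwise.
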
36 | def validar_email(email):
37 | pattern = r"^\w+([\.-]?\w+)@\w+([\.-]?\w+)(\.\w{2,3})+$"
38 | return re.match(pattern, email) is not None
39 |
40 | def credentials():
41 | email_input = Text(placeholder='Informe seu e-mail', description='E-mail:')
42 | name_input = Text(placeholder='Informe seu nome', description='Nome:')
43 | cep_input = Text(placeholder='Informe seu CEP', description='CEP:')
44 | area_input = Text(placeholder='Informe a área mapeada em hectares', description='Área (ha):', tooltip='Informe o tamanho da área mapeada em hectares.')
45 | non_brazil_checkbox = Checkbox(value=False, description="Não resido no Brasil")
46 | country_input = Text(placeholder='Informe seu país', description='País:', disabled=True)
47 | translate_dropdown = Dropdown(options=[('Português', 'pt'), ('English', 'en'), ('Español', 'es')], value='pt', description='Translate:')
48 | confirm_button = Button(description='Aceitar e enviar', button_style='success', tooltip='Enviar dados')
49 | form_output = Output()
50 | terms_checkbox = Checkbox(value=False, description='Eu li e aceito o termo de uso')
51 | terms_text = HTML()
52 |
53 | def toggle_country_input(*args):
54 | country_input.disabled = not non_brazil_checkbox.value
55 |
56 | non_brazil_checkbox.observe(toggle_country_input, 'value')
57 |
58 | messages = {
59 | 'pt': {
60 | 'accept_terms': 'Por favor, aceite os termos de uso para continuar.',
61 | 'valid_email': 'Por favor, forneça um email válido.',
62 | 'enter_name': 'Por favor, informe seu nome.',
63 | 'enter_area': 'Por favor, informe a área em hectares.',
64 | 'enter_country': 'Por favor, informe o país em que reside.',
65 | 'cep_not_found': 'CEP não encontrado.',
66 | 'mounting_drive': 'Montando Google Drive, por favor aguarde...',
67 | 'data_submitted': 'Dados enviados. Drive montado com sucesso.'
68 | },
69 | 'en': {
70 | 'accept_terms': 'Please accept the terms of use to continue.',
71 | 'valid_email': 'Please provide a valid email.',
72 | 'enter_name': 'Please enter your name.',
73 | 'enter_area': 'Please enter the mapped area in hectares.',
74 | 'enter_country': 'Please enter the country you reside in.',
75 | 'cep_not_found': 'ZIP not found.',
76 | 'mounting_drive': 'Mounting Google Drive, please wait...',
77 | 'data_submitted': 'Data submitted. Drive mounted successfully.'
78 | },
79 | 'es': {
80 | 'accept_terms': 'Por favor, acepte los términos de uso para continuar.',
81 | 'valid_email': 'Por favor, proporcione un correo electrónico válido.',
82 | 'enter_name': 'Por favor, ingrese su nombre.',
83 | 'enter_area': 'Por favor, ingrese el área mapeada en hectáreas.',
84 | 'enter_country': 'Por favor, indique el país en que reside.',
85 | 'cep_not_found': 'Código postal no encontrado.',
86 | 'mounting_drive': 'Montando Google Drive, por favor espere...',
87 | 'data_submitted': 'Datos enviados. Drive montado con éxito.'
88 | }
89 | }
90 |
91 | def update_translate(*args):
92 | lang = translate_dropdown.value
93 | if lang == 'en':
94 | email_input.placeholder = 'Enter your email'
95 | name_input.placeholder = 'Enter your name'
96 | cep_input.placeholder = 'Enter your ZIP code'
97 | area_input.placeholder = 'Enter mapped area in hectares'
98 | country_input.placeholder = 'Enter your country'
99 | confirm_button.description = 'Accept and Send'
100 | non_brazil_checkbox.description = "I do not reside in Brazil"
101 | terms_checkbox.description = 'I agree to the term of use'
102 | elif lang == 'es':
103 | email_input.placeholder = 'Ingrese su correo electrónico'
104 | name_input.placeholder = 'Ingrese su nombre'
105 | cep_input.placeholder = 'Ingrese su código postal'
106 | area_input.placeholder = 'Ingrese el área mapeada en hectáreas'
107 | country_input.placeholder = 'Ingrese su país'
108 | confirm_button.description = 'Aceptar y enviar'
109 | non_brazil_checkbox.description = "No resido en Brasil"
110 | terms_checkbox.description = 'He leído y acepto los términos de uso'
111 | else: # Default to Portuguese
112 | email_input.placeholder = 'Informe seu e-mail'
113 | name_input.placeholder = 'Informe seu nome'
114 | cep_input.placeholder = 'Informe seu CEP'
115 | area_input.placeholder = 'Informe a área em hectares'
116 | country_input.placeholder = 'Informe seu país'
117 | confirm_button.description = 'Aceitar e enviar'
118 | non_brazil_checkbox.description = "Não resido no Brasil"
119 | terms_checkbox.description = 'Eu li e aceito o termo de uso'
120 |
121 | update_terms_text(lang)
122 |
123 | def update_terms_text(lang):
124 | if lang == 'en':
125 |             terms_text.value = 'Please read the term of use carefully before submitting your data. By checking the box below, you agree to the terms of use.'
126 |         elif lang == 'es':
127 |             terms_text.value = 'Por favor, lea cuidadosamente el término de uso antes de enviar sus datos. Al marcar la casilla a continuación, usted acepta el término de uso.'
128 |         else: # Default to Portuguese
129 |             terms_text.value = 'Por favor, leia cuidadosamente o termo de uso antes de enviar seus dados. Ao marcar a caixa abaixo, você concorda com o termo de uso.'
130 |
131 | translate_dropdown.observe(update_translate, names='value')
132 | update_translate()
133 |
134 |
135 | def confirm_send(b):
136 | lang = translate_dropdown.value
137 | msg = messages[lang]
138 | with form_output:
139 | clear_output()
140 |
141 | if not validar_email(email_input.value):
142 | display(HTML(f'{msg["valid_email"]}'))
143 | return
144 |
145 | if not name_input.value.strip():
146 | display(HTML(f'{msg["enter_name"]}'))
147 | return
148 |
149 | cidade = ""
150 | estado = ""
151 | cep_valid = True
152 | if not non_brazil_checkbox.value:
153 | formatted_cep = format_cep(cep_input.value)
154 | cep_valid, logradouro, bairro, cidade, estado, pais = fetch_cep_data(formatted_cep)
155 | if not cep_valid:
156 | display(HTML(f'{msg["cep_not_found"]}'))
157 | return
158 |
159 | if not area_input.value.strip():
160 | display(HTML(f'{msg["enter_area"]}'))
161 | return
162 |
163 | if non_brazil_checkbox.value and not country_input.value:
164 | display(HTML(f'{msg["enter_country"]}'))
165 | return
166 |
167 | if not terms_checkbox.value:
168 | display(HTML(f'{msg["accept_terms"]}'))
169 | return
170 |
171 | display(HTML(f'{msg["mounting_drive"]}'))
172 | drive.mount('/content/drive')
173 | display(HTML(f'{msg["data_submitted"]}'))
174 |
175 | country = country_input.value if country_input.value else 'Brasil'
176 |
177 | form_data = {
178 | 'entry.79837568': name_input.value,
179 | 'entry.31901897': email_input.value,
180 | 'entry.1472348248': cep_input.value if cep_valid else "",
181 | 'entry.276405757': cidade if cep_valid else "",
182 | 'entry.839721720': estado if cep_valid else "",
183 | 'entry.807575090': country,
184 | 'entry.1662418940': area_input.value,
185 | }
186 | url = 'https://docs.google.com/forms/u/0/d/e/1FAIpQLSeiyE0r9ddUEMWVSbaRNGzoHhjRIp4DQH5branuxqO1eHg2Ag/formResponse'
187 | response = requests.post(url, data=form_data)
188 |
189 | response_data = {'status_code': response.status_code}
190 | with open('json/response_status.json', 'w') as file:
191 | json.dump(response_data, file)
192 |
193 |
194 |
195 | if response.status_code != 200:
196 |             display(HTML('Falha ao enviar o formulário. Status Code: ' + str(response.status_code)))
197 |
198 | confirm_button.on_click(confirm_send)
199 | display(VBox([translate_dropdown, email_input, name_input, cep_input, area_input, non_brazil_checkbox, country_input, terms_text, terms_checkbox, confirm_button, form_output]))
200 |
201 | if __name__ == "__main__":
202 | credentials()
203 |
--------------------------------------------------------------------------------
/utils/google_utils.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: google_utils.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 |
9 | # Google utils: https://cloud.google.com/storage/docs/reference/libraries
10 |
11 | import os
12 | import platform
13 | import subprocess
14 | import time
15 | from pathlib import Path
16 |
17 | import requests
18 | import torch
19 |
20 |
21 | def gsutil_getsize(url=''):
22 | # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
23 | s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
24 |     return int(s.split(' ')[0]) if len(s) else 0  # bytes (avoid eval on shell output)
25 |
26 |
27 | def attempt_download(file, repo='WongKinYiu/yolov7'):
28 | # Attempt file download if does not exist
29 | file = Path(str(file).strip().replace("'", '').lower())
30 |
31 | if not file.exists():
32 | try:
33 | response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
34 | assets = [x['name'] for x in response['assets']] # release assets
35 | tag = response['tag_name'] # i.e. 'v1.0'
36 |         except Exception:  # fallback plan
37 | assets = ['yolov7.pt', 'yolov7-tiny.pt', 'yolov7x.pt', 'yolov7-d6.pt', 'yolov7-e6.pt',
38 | 'yolov7-e6e.pt', 'yolov7-w6.pt']
39 | tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
40 |
41 | name = file.name
42 | if name in assets:
43 | msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/'
44 | redundant = False # second download option
45 | try: # GitHub
46 | url = f'https://github.com/{repo}/releases/download/{tag}/{name}'
47 | print(f'Downloading {url} to {file}...')
48 | torch.hub.download_url_to_file(url, file)
49 | assert file.exists() and file.stat().st_size > 1E6 # check
50 | except Exception as e: # GCP
51 | print(f'Download error: {e}')
52 | assert redundant, 'No secondary mirror'
53 | url = f'https://storage.googleapis.com/{repo}/ckpt/{name}'
54 | print(f'Downloading {url} to {file}...')
55 | os.system(f'curl -L {url} -o {file}') # torch.hub.download_url_to_file(url, weights)
56 | finally:
57 | if not file.exists() or file.stat().st_size < 1E6: # check
58 | file.unlink(missing_ok=True) # remove partial downloads
59 | print(f'ERROR: Download failure: {msg}')
60 | print('')
61 | return
62 |
63 |
64 | def gdrive_download(id='', file='tmp.zip'):
65 | # Downloads a file from Google Drive. from yolov7.utils.google_utils import *; gdrive_download()
66 | t = time.time()
67 | file = Path(file)
68 | cookie = Path('cookie') # gdrive cookie
69 | print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
70 | file.unlink(missing_ok=True) # remove existing file
71 | cookie.unlink(missing_ok=True) # remove existing cookie
72 |
73 | # Attempt file download
74 | out = "NUL" if platform.system() == "Windows" else "/dev/null"
75 | os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
76 | if os.path.exists('cookie'): # large file
77 | s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
78 | else: # small file
79 | s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
80 | r = os.system(s) # execute, capture return
81 | cookie.unlink(missing_ok=True) # remove existing cookie
82 |
83 | # Error check
84 | if r != 0:
85 | file.unlink(missing_ok=True) # remove partial
86 | print('Download error ') # raise Exception('Download error')
87 | return r
88 |
89 | # Unzip if archive
90 | if file.suffix == '.zip':
91 | print('unzipping... ', end='')
92 | os.system(f'unzip -q {file}') # unzip
93 | file.unlink() # remove zip to free space
94 |
95 | print(f'Done ({time.time() - t:.1f}s)')
96 | return r
97 |
98 |
99 | def get_token(cookie="./cookie"):
100 | with open(cookie) as f:
101 | for line in f:
102 | if "download" in line:
103 | return line.split()[-1]
104 | return ""
105 |
106 | # def upload_blob(bucket_name, source_file_name, destination_blob_name):
107 | # # Uploads a file to a bucket
108 | # # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
109 | #
110 | # storage_client = storage.Client()
111 | # bucket = storage_client.get_bucket(bucket_name)
112 | # blob = bucket.blob(destination_blob_name)
113 | #
114 | # blob.upload_from_filename(source_file_name)
115 | #
116 | # print('File {} uploaded to {}.'.format(
117 | # source_file_name,
118 | # destination_blob_name))
119 | #
120 | #
121 | # def download_blob(bucket_name, source_blob_name, destination_file_name):
122 | # # Uploads a blob from a bucket
123 | # storage_client = storage.Client()
124 | # bucket = storage_client.get_bucket(bucket_name)
125 | # blob = bucket.blob(source_blob_name)
126 | #
127 | # blob.download_to_filename(destination_file_name)
128 | #
129 | # print('Blob {} downloaded to {}.'.format(
130 | # source_blob_name,
131 | # destination_file_name))
132 |
--------------------------------------------------------------------------------
/utils/map_utils.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: map_utils.py
4 | Origin: Netflora (https://github.com/NetFlora/Netflora)
5 |
6 | """
7 |
8 | import folium
9 | import geopandas as gpd
10 | import branca.colormap as cm
11 | import json
12 |
13 | with open('processing/variable.json', 'r') as file:
14 | variables = json.load(file)
15 |
16 | crs = variables['crs']
17 | algorithm = variables['algorithm']
18 |
19 |
20 | gdf_path = f'results/shapefiles/resultados_{algorithm}.shp'
21 |
22 | gdf = gpd.read_file(gdf_path)
23 |
24 | def createMap():
25 |
26 | gdf_reproj = gdf.to_crs(epsg=4326)
27 |
28 |
29 | centroide = gdf_reproj.unary_union.centroid
30 |
31 |
32 | geojson_data = gdf_reproj.to_json()
33 |
34 |
35 | mapa = folium.Map(location=[centroide.y, centroide.x], zoom_start=17, tiles=None)
36 |
37 |
38 | _add_layers(mapa)
39 |
40 |
41 | paleta_cores = cm.linear.Set1_09.scale(0, gdf_reproj['class_id'].max())
42 |
43 |
44 | geojson_layer = folium.GeoJson(
45 | geojson_data,
46 | name='Shapefile',
47 | style_function=lambda feature: {
48 | 'fillColor': _get_color(feature, paleta_cores),
49 | 'color': 'black',
50 | 'weight': 1,
51 | 'opacity': 0.8,
52 | 'fillOpacity': 1
53 | }
54 | ).add_to(mapa)
55 |
56 |
57 | paleta_cores.caption = 'Classes'
58 | paleta_cores.add_to(mapa)
59 | folium.LayerControl().add_to(mapa)
60 |
61 | return mapa
62 |
63 | def _add_layers(mapa):
64 | folium.TileLayer(
65 | tiles='https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
66 | attr='OpenStreetMap',
67 | name='OpenStreetMap').add_to(mapa)
68 | folium.TileLayer(
69 | tiles='https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',
70 | attr='Esri',
71 | name='Esri Satellite',
72 | overlay=False
73 | ).add_to(mapa)
74 |
75 | def _get_color(feature, paleta_cores):
76 | class_id = feature['properties']['class_id']
77 | return paleta_cores(class_id)
78 |
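# --- Illustrative usage sketch (annotation, not part of the original file) ---
# After results.py has written the shapefile this module reads at import time,
# the interactive map can be built and saved from a notebook:
#
#   from utils.map_utils import createMap
#   mapa = createMap()
#   mapa.save('results/mapa.html')  # folium.Map.save writes standalone HTML
#
# ('results/mapa.html' is a hypothetical output path.)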
--------------------------------------------------------------------------------
/utils/metrics.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: metrics.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 |
9 | # Model validation metrics
10 |
11 | from pathlib import Path
12 |
13 | import matplotlib.pyplot as plt
14 | import numpy as np
15 | import torch
16 |
17 | from . import general
18 |
19 |
20 | def fitness(x):
21 | # Model fitness as a weighted combination of metrics
22 | w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
23 | return (x[:, :4] * w).sum(1)
24 |
25 |
26 | def ap_per_class(tp, conf, pred_cls, target_cls, v5_metric=False, plot=False, save_dir='.', names=()):
27 | """ Compute the average precision, given the recall and precision curves.
28 | Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
29 | # Arguments
30 | tp: True positives (nparray, nx1 or nx10).
31 | conf: Objectness value from 0-1 (nparray).
32 | pred_cls: Predicted object classes (nparray).
33 | target_cls: True object classes (nparray).
34 | plot: Plot precision-recall curve at mAP@0.5
35 | save_dir: Plot save directory
36 | # Returns
37 | The average precision as computed in py-faster-rcnn.
38 | """
39 |
40 | # Sort by objectness
41 | i = np.argsort(-conf)
42 | tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
43 |
44 | # Find unique classes
45 | unique_classes = np.unique(target_cls)
46 | nc = unique_classes.shape[0] # number of classes, number of detections
47 |
48 | # Create Precision-Recall curve and compute AP for each class
49 | px, py = np.linspace(0, 1, 1000), [] # for plotting
50 | ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
51 | for ci, c in enumerate(unique_classes):
52 | i = pred_cls == c
53 | n_l = (target_cls == c).sum() # number of labels
54 | n_p = i.sum() # number of predictions
55 |
56 | if n_p == 0 or n_l == 0:
57 | continue
58 | else:
59 | # Accumulate FPs and TPs
60 | fpc = (1 - tp[i]).cumsum(0)
61 | tpc = tp[i].cumsum(0)
62 |
63 | # Recall
64 | recall = tpc / (n_l + 1e-16) # recall curve
65 | r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
66 |
67 | # Precision
68 | precision = tpc / (tpc + fpc) # precision curve
69 | p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
70 |
71 | # AP from recall-precision curve
72 | for j in range(tp.shape[1]):
73 | ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j], v5_metric=v5_metric)
74 | if plot and j == 0:
75 | py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
76 |
77 | # Compute F1 (harmonic mean of precision and recall)
78 | f1 = 2 * p * r / (p + r + 1e-16)
79 | if plot:
80 | plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
81 | plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
82 | plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
83 | plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
84 |
85 | i = f1.mean(0).argmax() # max F1 index
86 | return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
87 |
88 |
89 | def compute_ap(recall, precision, v5_metric=False):
90 | """ Compute the average precision, given the recall and precision curves
91 | # Arguments
92 | recall: The recall curve (list)
93 | precision: The precision curve (list)
94 |         v5_metric: Assume maximum recall to be 1.0, as in YOLOv5, MMDetection etc.
95 | # Returns
96 | Average precision, precision curve, recall curve
97 | """
98 |
99 | # Append sentinel values to beginning and end
100 | if v5_metric: # New YOLOv5 metric, same as MMDetection and Detectron2 repositories
101 | mrec = np.concatenate(([0.], recall, [1.0]))
102 | else: # Old YOLOv5 metric, i.e. default YOLOv7 metric
103 | mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
104 | mpre = np.concatenate(([1.], precision, [0.]))
105 |
106 | # Compute the precision envelope
107 | mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
108 |
109 | # Integrate area under curve
110 | method = 'interp' # methods: 'continuous', 'interp'
111 | if method == 'interp':
112 | x = np.linspace(0, 1, 101) # 101-point interp (COCO)
113 | ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
114 | else: # 'continuous'
115 | i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
116 | ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
117 |
118 | return ap, mpre, mrec
119 |
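# Worked toy example (annotation, not part of the original file): three
# detections sorted by confidence, of which the 1st and 3rd match the 2 labels,
# give recall = [0.5, 0.5, 1.0] and precision = [1.0, 0.5, 0.667];
# compute_ap() pads these curves, takes the monotone precision envelope and
# integrates it over recall with 101-point interpolation, yielding an AP of
# roughly 0.83 here.
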
120 |
121 | class ConfusionMatrix:
122 | # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
123 | def __init__(self, nc, conf=0.25, iou_thres=0.45):
124 | self.matrix = np.zeros((nc + 1, nc + 1))
125 | self.nc = nc # number of classes
126 | self.conf = conf
127 | self.iou_thres = iou_thres
128 |
129 | def process_batch(self, detections, labels):
130 | """
131 |         Update the confusion matrix for one batch of detections and ground-truth labels.
132 |         Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
133 | Arguments:
134 | detections (Array[N, 6]), x1, y1, x2, y2, conf, class
135 | labels (Array[M, 5]), class, x1, y1, x2, y2
136 | Returns:
137 | None, updates confusion matrix accordingly
138 | """
139 | detections = detections[detections[:, 4] > self.conf]
140 | gt_classes = labels[:, 0].int()
141 | detection_classes = detections[:, 5].int()
142 | iou = general.box_iou(labels[:, 1:], detections[:, :4])
143 |
144 | x = torch.where(iou > self.iou_thres)
145 | if x[0].shape[0]:
146 | matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
147 | if x[0].shape[0] > 1:
148 | matches = matches[matches[:, 2].argsort()[::-1]]
149 | matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
150 | matches = matches[matches[:, 2].argsort()[::-1]]
151 | matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
152 | else:
153 | matches = np.zeros((0, 3))
154 |
155 | n = matches.shape[0] > 0
156 | m0, m1, _ = matches.transpose().astype(np.int16)
157 | for i, gc in enumerate(gt_classes):
158 | j = m0 == i
159 | if n and sum(j) == 1:
160 | self.matrix[gc, detection_classes[m1[j]]] += 1 # correct
161 | else:
162 | self.matrix[self.nc, gc] += 1 # background FP
163 |
164 | if n:
165 | for i, dc in enumerate(detection_classes):
166 | if not any(m1 == i):
167 | self.matrix[dc, self.nc] += 1 # background FN
168 |
169 |     def get_matrix(self):  # renamed from matrix(): the ndarray attribute self.matrix set in __init__ shadows a method of that name
170 |         return self.matrix
171 |
172 | def plot(self, save_dir='', names=()):
173 | try:
174 | import seaborn as sn
175 |
176 | array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize
177 | array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
178 |
179 | fig = plt.figure(figsize=(12, 9), tight_layout=True)
180 | sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
181 | labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
182 | sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
183 | xticklabels=names + ['background FP'] if labels else "auto",
184 | yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
185 | fig.axes[0].set_xlabel('True')
186 | fig.axes[0].set_ylabel('Predicted')
187 | fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
188 |         except Exception as e:
189 |             print(f'WARNING: ConfusionMatrix plot failure: {e}')
190 |
191 | def print(self):
192 | for i in range(self.nc + 1):
193 | print(' '.join(map(str, self.matrix[i])))
194 |
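    | # Illustrative usage (sketch) for a single-class model:
    | #   import torch
    | #   cm = ConfusionMatrix(nc=1)
    | #   detections = torch.tensor([[10., 10., 50., 50., 0.9, 0.]])  # x1, y1, x2, y2, conf, class
    | #   labels = torch.tensor([[0., 10., 10., 50., 50.]])           # class, x1, y1, x2, y2
    | #   cm.process_batch(detections, labels)
    | #   cm.print()  # one correct prediction accumulated for class 0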
195 |
196 | # Plots ----------------------------------------------------------------------------------------------------------------
197 |
198 | def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
199 | # Precision-recall curve
200 | fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
201 | py = np.stack(py, axis=1)
202 |
203 | if 0 < len(names) < 21: # display per-class legend if < 21 classes
204 | for i, y in enumerate(py.T):
205 | ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
206 | else:
207 | ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
208 |
209 | ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
210 | ax.set_xlabel('Recall')
211 | ax.set_ylabel('Precision')
212 | ax.set_xlim(0, 1)
213 | ax.set_ylim(0, 1)
214 | plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
215 | fig.savefig(Path(save_dir), dpi=250)
216 |
217 |
218 | def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
219 | # Metric-confidence curve
220 | fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
221 |
222 | if 0 < len(names) < 21: # display per-class legend if < 21 classes
223 | for i, y in enumerate(py):
224 | ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
225 | else:
226 | ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
227 |
228 | y = py.mean(0)
229 | ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
230 | ax.set_xlabel(xlabel)
231 | ax.set_ylabel(ylabel)
232 | ax.set_xlim(0, 1)
233 | ax.set_ylim(0, 1)
234 | plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
235 | fig.savefig(Path(save_dir), dpi=250)
236 |
--------------------------------------------------------------------------------
/utils/plots.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: plots.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 | # Plotting utils
9 |
10 | import glob
11 | import math
12 | import os
13 | import random
14 | from copy import copy
15 | from pathlib import Path
16 |
17 | import cv2
18 | import matplotlib
19 | import matplotlib.pyplot as plt
20 | import numpy as np
21 | import pandas as pd
22 | import seaborn as sns
23 | import torch
24 | import yaml
25 | from PIL import Image, ImageDraw, ImageFont
26 | from scipy.signal import butter, filtfilt
27 |
28 | from utils.general import xywh2xyxy, xyxy2xywh
29 | from utils.metrics import fitness
30 |
31 | # Settings
32 | matplotlib.rc('font', **{'size': 11})
33 | matplotlib.use('Agg') # for writing to files only
34 |
35 |
36 | def color_list():
37 | # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb
38 | def hex2rgb(h):
39 | return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))
40 |
41 | return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949)
42 |
43 |
44 | def hist2d(x, y, n=100):
45 | # 2d histogram used in labels.png and evolve.png
46 | xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n)
47 | hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges))
48 | xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1)
49 | yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1)
50 | return np.log(hist[xidx, yidx])
51 |
52 |
53 | def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
54 | # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy
55 | def butter_lowpass(cutoff, fs, order):
56 | nyq = 0.5 * fs
57 | normal_cutoff = cutoff / nyq
58 | return butter(order, normal_cutoff, btype='low', analog=False)
59 |
60 | b, a = butter_lowpass(cutoff, fs, order=order)
61 | return filtfilt(b, a, data) # forward-backward filter
62 |
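    | # Illustrative usage (sketch): zero-phase smoothing of a noisy 5 Hz sine sampled at the default fs=50000.
    | #   t = np.linspace(0, 1, 50000)
    | #   noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
    | #   smooth = butter_lowpass_filtfilt(noisy)  # keeps content below the 1500 Hz cutoff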
63 |
64 | def plot_one_box(x, img, color=None, label=None, line_thickness=3):
65 | # Plots one bounding box on image img
66 | tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness
67 | color = color or [random.randint(0, 255) for _ in range(3)]
68 | c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
69 | cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
70 | if label:
71 | tf = max(tl - 1, 1) # font thickness
72 | t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
73 | c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
74 | cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled
75 | cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)
76 |
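    | # Illustrative usage (sketch): draw one labelled box on a blank BGR canvas
    | # ('palm 0.92' is just a made-up class/confidence label):
    | #   canvas = np.zeros((480, 640, 3), dtype=np.uint8)
    | #   plot_one_box([100, 100, 300, 250], canvas, color=(0, 255, 0), label='palm 0.92')
    | #   cv2.imwrite('box_demo.jpg', canvas)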
77 |
78 | def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None):
79 | img = Image.fromarray(img)
80 | draw = ImageDraw.Draw(img)
81 | line_thickness = line_thickness or max(int(min(img.size) / 200), 2)
82 | draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot
83 | if label:
84 | fontsize = max(round(max(img.size) / 40), 12)
85 | font = ImageFont.truetype("Arial.ttf", fontsize)
86 | txt_width, txt_height = font.getsize(label)
87 | draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color))
88 | draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font)
89 | return np.asarray(img)
90 |
91 |
92 | def plot_wh_methods(): # from utils.plots import *; plot_wh_methods()
93 | # Compares the two methods for width-height anchor multiplication
94 | # https://github.com/ultralytics/yolov3/issues/168
95 | x = np.arange(-4.0, 4.0, .1)
96 | ya = np.exp(x)
97 | yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2
98 |
99 | fig = plt.figure(figsize=(6, 3), tight_layout=True)
100 | plt.plot(x, ya, '.-', label='YOLOv3')
101 | plt.plot(x, yb ** 2, '.-', label='YOLOR ^2')
102 | plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6')
103 | plt.xlim(left=-4, right=4)
104 | plt.ylim(bottom=0, top=6)
105 | plt.xlabel('input')
106 | plt.ylabel('output')
107 | plt.grid()
108 | plt.legend()
109 | fig.savefig('comparison.png', dpi=200)
110 |
111 |
112 | def output_to_target(output):
113 | # Convert model output to target format [batch_id, class_id, x, y, w, h, conf]
114 | targets = []
115 | for i, o in enumerate(output):
116 | for *box, conf, cls in o.cpu().numpy():
117 | targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf])
118 | return np.array(targets)
119 |
120 |
121 | def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16):
122 | # Plot image grid with labels
123 |
124 | if isinstance(images, torch.Tensor):
125 | images = images.cpu().float().numpy()
126 | if isinstance(targets, torch.Tensor):
127 | targets = targets.cpu().numpy()
128 |
129 | # un-normalise
130 | if np.max(images[0]) <= 1:
131 | images *= 255
132 |
133 | tl = 3 # line thickness
134 | tf = max(tl - 1, 1) # font thickness
135 | bs, _, h, w = images.shape # batch size, _, height, width
136 | bs = min(bs, max_subplots) # limit plot images
137 | ns = np.ceil(bs ** 0.5) # number of subplots (square)
138 |
139 | # Check if we should resize
140 | scale_factor = max_size / max(h, w)
141 | if scale_factor < 1:
142 | h = math.ceil(scale_factor * h)
143 | w = math.ceil(scale_factor * w)
144 |
145 | colors = color_list() # list of colors
146 | mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
147 | for i, img in enumerate(images):
148 |         if i == max_subplots:  # stop once the mosaic grid is full
149 | break
150 |
151 | block_x = int(w * (i // ns))
152 | block_y = int(h * (i % ns))
153 |
154 | img = img.transpose(1, 2, 0)
155 | if scale_factor < 1:
156 | img = cv2.resize(img, (w, h))
157 |
158 | mosaic[block_y:block_y + h, block_x:block_x + w, :] = img
159 | if len(targets) > 0:
160 | image_targets = targets[targets[:, 0] == i]
161 | boxes = xywh2xyxy(image_targets[:, 2:6]).T
162 | classes = image_targets[:, 1].astype('int')
163 | labels = image_targets.shape[1] == 6 # labels if no conf column
164 | conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred)
165 |
166 | if boxes.shape[1]:
167 | if boxes.max() <= 1.01: # if normalized with tolerance 0.01
168 | boxes[[0, 2]] *= w # scale to pixels
169 | boxes[[1, 3]] *= h
170 | elif scale_factor < 1: # absolute coords need scale if image scales
171 | boxes *= scale_factor
172 | boxes[[0, 2]] += block_x
173 | boxes[[1, 3]] += block_y
174 | for j, box in enumerate(boxes.T):
175 | cls = int(classes[j])
176 | color = colors[cls % len(colors)]
177 | cls = names[cls] if names else cls
178 | if labels or conf[j] > 0.25: # 0.25 conf thresh
179 | label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j])
180 | plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl)
181 |
182 | # Draw image filename labels
183 | if paths:
184 | label = Path(paths[i]).name[:40] # trim to 40 char
185 | t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
186 | cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf,
187 | lineType=cv2.LINE_AA)
188 |
189 | # Image border
190 | cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3)
191 |
192 | if fname:
193 | r = min(1280. / max(h, w) / ns, 1.0) # ratio to limit image size
194 | mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA)
195 | # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save
196 | Image.fromarray(mosaic).save(fname) # PIL save
197 | return mosaic
198 |
199 |
200 | def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
201 | # Plot LR simulating training for full epochs
202 | optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals
203 | y = []
204 | for _ in range(epochs):
205 | scheduler.step()
206 | y.append(optimizer.param_groups[0]['lr'])
207 | plt.plot(y, '.-', label='LR')
208 | plt.xlabel('epoch')
209 | plt.ylabel('LR')
210 | plt.grid()
211 | plt.xlim(0, epochs)
212 | plt.ylim(0)
213 | plt.savefig(Path(save_dir) / 'LR.png', dpi=200)
214 | plt.close()
215 |
216 |
217 | def plot_test_txt(): # from utils.plots import *; plot_test()
218 | # Plot test.txt histograms
219 | x = np.loadtxt('test.txt', dtype=np.float32)
220 | box = xyxy2xywh(x[:, :4])
221 | cx, cy = box[:, 0], box[:, 1]
222 |
223 | fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True)
224 | ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0)
225 | ax.set_aspect('equal')
226 | plt.savefig('hist2d.png', dpi=300)
227 |
228 | fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True)
229 | ax[0].hist(cx, bins=600)
230 | ax[1].hist(cy, bins=600)
231 | plt.savefig('hist1d.png', dpi=200)
232 |
233 |
234 | def plot_targets_txt(): # from utils.plots import *; plot_targets_txt()
235 | # Plot targets.txt histograms
236 | x = np.loadtxt('targets.txt', dtype=np.float32).T
237 | s = ['x targets', 'y targets', 'width targets', 'height targets']
238 | fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)
239 | ax = ax.ravel()
240 | for i in range(4):
241 | ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std()))
242 | ax[i].legend()
243 | ax[i].set_title(s[i])
244 | plt.savefig('targets.jpg', dpi=200)
245 |
246 |
247 | def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt()
248 | # Plot study.txt generated by test.py
249 | fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)
250 | # ax = ax.ravel()
251 |
252 | fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
253 | # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]:
254 | for f in sorted(Path(path).glob('study*.txt')):
255 | y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
256 | x = np.arange(y.shape[1]) if x is None else np.array(x)
257 | s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)']
258 | # for i in range(7):
259 | # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8)
260 | # ax[i].set_title(s[i])
261 |
262 | j = y[3].argmax() + 1
263 | ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8,
264 | label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
265 |
266 | ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
267 | 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet')
268 |
269 | ax2.grid(alpha=0.2)
270 | ax2.set_yticks(np.arange(20, 60, 5))
271 | ax2.set_xlim(0, 57)
272 | ax2.set_ylim(30, 55)
273 | ax2.set_xlabel('GPU Speed (ms/img)')
274 | ax2.set_ylabel('COCO AP val')
275 | ax2.legend(loc='lower right')
276 | plt.savefig(str(Path(path).name) + '.png', dpi=300)
277 |
278 |
279 | def plot_labels(labels, names=(), save_dir=Path(''), loggers=None):
280 | # plot dataset labels
281 | print('Plotting labels... ')
282 | c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes
283 | nc = int(c.max() + 1) # number of classes
284 | colors = color_list()
285 | x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height'])
286 |
287 | # seaborn correlogram
288 | sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9))
289 | plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200)
290 | plt.close()
291 |
292 | # matplotlib labels
293 | matplotlib.use('svg') # faster
294 | ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()
295 | ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)
296 | ax[0].set_ylabel('instances')
297 | if 0 < len(names) < 30:
298 | ax[0].set_xticks(range(len(names)))
299 | ax[0].set_xticklabels(names, rotation=90, fontsize=10)
300 | else:
301 | ax[0].set_xlabel('classes')
302 | sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9)
303 | sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9)
304 |
305 | # rectangles
306 | labels[:, 1:3] = 0.5 # center
307 | labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000
308 | img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255)
309 | for cls, *box in labels[:1000]:
310 | ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot
311 | ax[1].imshow(img)
312 | ax[1].axis('off')
313 |
314 | for a in [0, 1, 2, 3]:
315 | for s in ['top', 'right', 'left', 'bottom']:
316 | ax[a].spines[s].set_visible(False)
317 |
318 | plt.savefig(save_dir / 'labels.jpg', dpi=200)
319 | matplotlib.use('Agg')
320 | plt.close()
321 |
322 | # loggers
323 |     for k, v in (loggers or {}).items():  # tolerate loggers=None
324 | if k == 'wandb' and v:
325 | v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False)
326 |
327 |
328 | def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution()
329 | # Plot hyperparameter evolution results in evolve.txt
330 | with open(yaml_file) as f:
331 | hyp = yaml.load(f, Loader=yaml.SafeLoader)
332 | x = np.loadtxt('evolve.txt', ndmin=2)
333 | f = fitness(x)
334 | # weights = (f - f.min()) ** 2 # for weighted results
335 | plt.figure(figsize=(10, 12), tight_layout=True)
336 | matplotlib.rc('font', **{'size': 8})
337 | for i, (k, v) in enumerate(hyp.items()):
338 | y = x[:, i + 7]
339 | # mu = (y * weights).sum() / weights.sum() # best weighted result
340 | mu = y[f.argmax()] # best single result
341 | plt.subplot(6, 5, i + 1)
342 | plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none')
343 | plt.plot(mu, f.max(), 'k+', markersize=15)
344 | plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters
345 | if i % 5 != 0:
346 | plt.yticks([])
347 | print('%15s: %.3g' % (k, mu))
348 | plt.savefig('evolve.png', dpi=200)
349 | print('\nPlot saved as evolve.png')
350 |
351 |
352 | def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
353 | # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection()
354 | ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel()
355 | s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS']
356 | files = list(Path(save_dir).glob('frames*.txt'))
357 | for fi, f in enumerate(files):
358 | try:
359 | results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows
360 | n = results.shape[1] # number of rows
361 | x = np.arange(start, min(stop, n) if stop else n)
362 | results = results[:, x]
363 | t = (results[0] - results[0].min()) # set t0=0s
364 | results[0] = x
365 | for i, a in enumerate(ax):
366 | if i < len(results):
367 | label = labels[fi] if len(labels) else f.stem.replace('frames_', '')
368 | a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5)
369 | a.set_title(s[i])
370 | a.set_xlabel('time (s)')
371 | # if fi == len(files) - 1:
372 | # a.set_ylim(bottom=0)
373 | for side in ['top', 'right']:
374 | a.spines[side].set_visible(False)
375 | else:
376 | a.remove()
377 | except Exception as e:
378 | print('Warning: Plotting error for %s; %s' % (f, e))
379 |
380 | ax[1].legend()
381 | plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200)
382 |
383 |
384 | def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay()
385 | # Plot training 'results*.txt', overlaying train and val losses
386 | s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends
387 | t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles
388 | for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')):
389 | results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T
390 | n = results.shape[1] # number of rows
391 | x = range(start, min(stop, n) if stop else n)
392 | fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True)
393 | ax = ax.ravel()
394 | for i in range(5):
395 | for j in [i, i + 5]:
396 | y = results[j, x]
397 | ax[i].plot(x, y, marker='.', label=s[j])
398 | # y_smooth = butter_lowpass_filtfilt(y)
399 | # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j])
400 |
401 | ax[i].set_title(t[i])
402 | ax[i].legend()
403 | ax[i].set_ylabel(f) if i == 0 else None # add filename
404 | fig.savefig(f.replace('.txt', '.png'), dpi=200)
405 |
406 |
407 | def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''):
408 | # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp')
409 | fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True)
410 | ax = ax.ravel()
411 | s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall',
412 | 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95']
413 | if bucket:
414 | # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id]
415 | files = ['results%g.txt' % x for x in id]
416 | c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id)
417 | os.system(c)
418 | else:
419 | files = list(Path(save_dir).glob('results*.txt'))
420 | assert len(files), 'No results.txt files found in %s, nothing to plot.' % os.path.abspath(save_dir)
421 | for fi, f in enumerate(files):
422 | try:
423 | results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T
424 | n = results.shape[1] # number of rows
425 | x = range(start, min(stop, n) if stop else n)
426 | for i in range(10):
427 | y = results[i, x]
428 | if i in [0, 1, 2, 5, 6, 7]:
429 | y[y == 0] = np.nan # don't show zero loss values
430 | # y /= y[0] # normalize
431 | label = labels[fi] if len(labels) else f.stem
432 | ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8)
433 | ax[i].set_title(s[i])
434 | # if i in [5, 6, 7]: # share train and val loss y axes
435 | # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
436 | except Exception as e:
437 | print('Warning: Plotting error for %s; %s' % (f, e))
438 |
439 | ax[1].legend()
440 | fig.savefig(Path(save_dir) / 'results.png', dpi=200)
441 |
442 |
443 | def output_to_keypoint(output):
444 |     # Convert model output to target format [batch_id, class_id, x, y, w, h, conf, *kpts]
445 | targets = []
446 | for i, o in enumerate(output):
447 | kpts = o[:,6:]
448 | o = o[:,:6]
449 | for index, (*box, conf, cls) in enumerate(o.detach().cpu().numpy()):
450 | targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf, *list(kpts.detach().cpu().numpy()[index])])
451 | return np.array(targets)
452 |
453 |
454 | def plot_skeleton_kpts(im, kpts, steps, orig_shape=None):
455 |     # Plot the skeleton and keypoints for the COCO dataset
456 | palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102],
457 | [230, 230, 0], [255, 153, 255], [153, 204, 255],
458 | [255, 102, 255], [255, 51, 255], [102, 178, 255],
459 | [51, 153, 255], [255, 153, 153], [255, 102, 102],
460 | [255, 51, 51], [153, 255, 153], [102, 255, 102],
461 | [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0],
462 | [255, 255, 255]])
463 |
464 | skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12],
465 | [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3],
466 | [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]
467 |
468 | pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]]
469 | pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]]
470 | radius = 5
471 | num_kpts = len(kpts) // steps
472 |
473 | for kid in range(num_kpts):
474 | r, g, b = pose_kpt_color[kid]
475 | x_coord, y_coord = kpts[steps * kid], kpts[steps * kid + 1]
476 | if not (x_coord % 640 == 0 or y_coord % 640 == 0):
477 | if steps == 3:
478 | conf = kpts[steps * kid + 2]
479 | if conf < 0.5:
480 | continue
481 | cv2.circle(im, (int(x_coord), int(y_coord)), radius, (int(r), int(g), int(b)), -1)
482 |
483 | for sk_id, sk in enumerate(skeleton):
484 | r, g, b = pose_limb_color[sk_id]
485 | pos1 = (int(kpts[(sk[0]-1)*steps]), int(kpts[(sk[0]-1)*steps+1]))
486 | pos2 = (int(kpts[(sk[1]-1)*steps]), int(kpts[(sk[1]-1)*steps+1]))
487 | if steps == 3:
488 | conf1 = kpts[(sk[0]-1)*steps+2]
489 | conf2 = kpts[(sk[1]-1)*steps+2]
490 | if conf1<0.5 or conf2<0.5:
491 | continue
492 | if pos1[0]%640 == 0 or pos1[1]%640==0 or pos1[0]<0 or pos1[1]<0:
493 | continue
494 | if pos2[0] % 640 == 0 or pos2[1] % 640 == 0 or pos2[0]<0 or pos2[1]<0:
495 | continue
496 | cv2.line(im, pos1, pos2, (int(r), int(g), int(b)), thickness=2)
497 |
--------------------------------------------------------------------------------
/utils/thresh_display.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: thresh_display.py
4 | Origin: Netflora (https://github.com/NetFlora/Netflora)
5 |
6 | """
7 |
8 |
9 | from ipywidgets import SelectionSlider, interact
10 | from IPython.display import display, clear_output
11 | from PIL import Image
12 | import glob
13 | import os
14 |
15 | class ImageDisplayer:
16 | def __init__(self, base_dir='runs/detect', save_dir='results/imagens_threshold', thresholds=None, image_limit=5):
17 | self.base_dir = base_dir
18 | self.save_dir = save_dir
19 | self.thresholds = thresholds if thresholds is not None else [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
20 | self.image_limit = image_limit
21 |
22 |
23 | if not os.path.exists(self.save_dir):
24 | os.makedirs(self.save_dir)
25 |
26 |
27 | self.preprocess_and_save_all_images()
28 |
29 |
30 | self.setup_slider()
31 |
32 | def setup_slider(self):
33 |
34 | self.threshold_slider = SelectionSlider(
35 | options=[(f'{value}', value) for value in self.thresholds],
36 | value=self.thresholds[0],
37 | description='Threshold:',
38 | continuous_update=True,
39 | readout=True
40 | )
41 | interact(self.display_saved_image, threshold=self.threshold_slider)
42 |
43 | def preprocess_and_save_all_images(self):
44 |
45 | for threshold in self.thresholds:
46 | self.process_images_for_threshold(threshold)
47 |
48 | def process_images_for_threshold(self, threshold):
49 |
50 | image_dir = f'{self.base_dir}/{threshold:.2f}'
51 | images = glob.glob(os.path.join(image_dir, '*.jpg'))[:self.image_limit]
52 |
53 | if images:
54 | self.create_and_save_composite_image(images, threshold)
55 | else:
56 |             print(f'No images found for threshold {threshold:.2f}.')
57 |
58 | def create_and_save_composite_image(self, images, threshold):
59 |
60 | size = (640, 640)
61 | composite_img = Image.new('RGB', (size[0] * len(images), size[1]))
62 |
63 | for i, img_path in enumerate(images):
64 | img = Image.open(img_path).resize(size, Image.Resampling.LANCZOS)
65 | composite_img.paste(img, (i * size[0], 0))
66 |
67 | composite_save_path = os.path.join(self.save_dir, f'composite_threshold_{threshold:.2f}.jpg')
68 | composite_img.save(composite_save_path)
69 |
70 | def display_saved_image(self, threshold):
71 |
72 | clear_output(wait=True)
73 | composite_path = os.path.join(self.save_dir, f'composite_threshold_{threshold:.2f}.jpg')
74 |
75 | if os.path.exists(composite_path):
76 | img = Image.open(composite_path)
77 | display(img)
78 | print(f'Threshold: {threshold}')
79 | else:
80 |             print(f'No composite image found for threshold {threshold:.2f}.')
81 |
82 |
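    | # Illustrative usage (sketch) in a Jupyter/Colab cell, assuming detections have been
    | # saved under runs/detect/<threshold>/ for each threshold of interest:
    | #   displayer = ImageDisplayer(base_dir='runs/detect', thresholds=[0.10, 0.25, 0.50])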
--------------------------------------------------------------------------------
/utils/torch_utils.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | File Name: torch_utils.py
4 | Origin: yolov7 (https://github.com/WongKinYiu/yolov7)
5 |
6 | """
7 |
8 |
9 | # YOLOR PyTorch utils
10 |
11 | import datetime
12 | import logging
13 | import math
14 | import os
15 | import platform
16 | import subprocess
17 | import time
18 | from contextlib import contextmanager
19 | from copy import deepcopy
20 | from pathlib import Path
21 |
22 | import torch
23 | import torch.backends.cudnn as cudnn
24 | import torch.nn as nn
25 | import torch.nn.functional as F
26 | import torchvision
27 |
28 | try:
29 | import thop # for FLOPS computation
30 | except ImportError:
31 | thop = None
32 | logger = logging.getLogger(__name__)
33 |
34 |
35 | @contextmanager
36 | def torch_distributed_zero_first(local_rank: int):
37 | """
38 |     Context manager that makes all processes in distributed training wait for the local master to finish a task first.
39 | """
40 | if local_rank not in [-1, 0]:
41 | torch.distributed.barrier()
42 | yield
43 | if local_rank == 0:
44 | torch.distributed.barrier()
45 |
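    | # Illustrative usage (sketch): let the local master build/cache a dataset before the
    | # other ranks touch it (create_dataset is a hypothetical helper):
    | #   with torch_distributed_zero_first(local_rank):
    | #       dataset = create_dataset(path)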
46 |
47 | def init_torch_seeds(seed=0):
48 | # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
49 | torch.manual_seed(seed)
50 | if seed == 0: # slower, more reproducible
51 | cudnn.benchmark, cudnn.deterministic = False, True
52 | else: # faster, less reproducible
53 | cudnn.benchmark, cudnn.deterministic = True, False
54 |
55 |
56 | def date_modified(path=__file__):
57 | # return human-readable file modification date, i.e. '2021-3-26'
58 | t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
59 | return f'{t.year}-{t.month}-{t.day}'
60 |
61 |
62 | def git_describe(path=Path(__file__).parent): # path must be a directory
63 | # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
64 | s = f'git -C {path} describe --tags --long --always'
65 | try:
66 | return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
67 | except subprocess.CalledProcessError as e:
68 | return '' # not a git repository
69 |
70 |
71 | def select_device(device='', batch_size=None):
72 | # Initialize device configuration string
73 | s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} '
74 |
75 | # Check if the requested device is CPU
76 | cpu = device.lower() == 'cpu'
77 | if cpu:
78 | os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # Ensure torch.cuda.is_available() returns False
79 | elif device:
80 | # Set GPU device environment variable
81 | os.environ['CUDA_VISIBLE_DEVICES'] = device
82 | if not torch.cuda.is_available():
83 | logger.warning(f'CUDA unavailable, falling back to CPU. Invalid device {device} requested')
84 | device = 'cpu' # Fallback to CPU
85 | cpu = True
86 |
87 | # Check availability of CUDA
88 | cuda = not cpu and torch.cuda.is_available()
89 | if cuda:
90 | n = torch.cuda.device_count()
91 | if n > 1 and batch_size: # Ensure batch size is compatible with device count
92 | assert batch_size % n == 0, f'Batch-size {batch_size} is not a multiple of GPU count {n}'
93 | space = ' ' * len(s)
94 | for i, d in enumerate(device.split(',') if device else range(n)):
95 | p = torch.cuda.get_device_properties(i)
96 | s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n"
97 | else:
98 | s += 'CPU\n'
99 |
100 | logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s)
101 | return torch.device('cuda:0' if cuda else 'cpu')
102 |
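    | # Illustrative usage (sketch):
    | #   device = select_device('0', batch_size=16)  # first CUDA device; checks divisibility on multi-GPU
    | #   device = select_device('cpu')               # force CPU
    | #   model = model.to(device)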
103 |
104 | def time_synchronized():
105 | # pytorch-accurate time
106 | if torch.cuda.is_available():
107 | torch.cuda.synchronize()
108 | return time.time()
109 |
110 |
111 | def profile(x, ops, n=100, device=None):
112 | # profile a pytorch module or list of modules. Example usage:
113 | # x = torch.randn(16, 3, 640, 640) # input
114 | # m1 = lambda x: x * torch.sigmoid(x)
115 | # m2 = nn.SiLU()
116 | # profile(x, [m1, m2], n=100) # profile speed over 100 iterations
117 |
118 | device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
119 | x = x.to(device)
120 | x.requires_grad = True
121 | print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
122 | print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}")
123 | for m in ops if isinstance(ops, list) else [ops]:
124 | m = m.to(device) if hasattr(m, 'to') else m # device
125 | m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type
126 | dtf, dtb, t = 0., 0., [0., 0., 0.] # dt forward, backward
127 | try:
128 | flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS
129 |         except Exception:  # thop unavailable or module not profileable
130 | flops = 0
131 |
132 | for _ in range(n):
133 | t[0] = time_synchronized()
134 | y = m(x)
135 | t[1] = time_synchronized()
136 | try:
137 | _ = y.sum().backward()
138 | t[2] = time_synchronized()
139 |             except Exception:  # no backward method
140 | t[2] = float('nan')
141 | dtf += (t[1] - t[0]) * 1000 / n # ms per op forward
142 | dtb += (t[2] - t[1]) * 1000 / n # ms per op backward
143 |
144 | s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
145 | s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
146 | p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
147 | print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}')
148 |
149 |
150 | def is_parallel(model):
151 | return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
152 |
153 |
154 | def intersect_dicts(da, db, exclude=()):
155 | # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
156 | return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
157 |
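    | # Illustrative usage (sketch): transfer only compatible weights from a checkpoint,
    | # e.g. when fine-tuning on a different class count (assumes the yolov7-style
    | # checkpoint layout with the model stored under the 'model' key):
    | #   ckpt = torch.load('model_weights.pt', map_location=device)
    | #   state = intersect_dicts(ckpt['model'].float().state_dict(), model.state_dict(), exclude=['anchor'])
    | #   model.load_state_dict(state, strict=False)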
158 |
159 | def initialize_weights(model):
160 | for m in model.modules():
161 | t = type(m)
162 | if t is nn.Conv2d:
163 | pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
164 | elif t is nn.BatchNorm2d:
165 | m.eps = 1e-3
166 | m.momentum = 0.03
167 | elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
168 | m.inplace = True
169 |
170 |
171 | def find_modules(model, mclass=nn.Conv2d):
172 | # Finds layer indices matching module class 'mclass'
173 | return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
174 |
175 |
176 | def sparsity(model):
177 | # Return global model sparsity
178 | a, b = 0., 0.
179 | for p in model.parameters():
180 | a += p.numel()
181 | b += (p == 0).sum()
182 | return b / a
183 |
184 |
185 | def prune(model, amount=0.3):
186 | # Prune model to requested global sparsity
187 | import torch.nn.utils.prune as prune
188 | print('Pruning model... ', end='')
189 | for name, m in model.named_modules():
190 | if isinstance(m, nn.Conv2d):
191 | prune.l1_unstructured(m, name='weight', amount=amount) # prune
192 | prune.remove(m, 'weight') # make permanent
193 | print(' %.3g global sparsity' % sparsity(model))
194 |
195 |
196 | def fuse_conv_and_bn(conv, bn):
197 | # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
198 | fusedconv = nn.Conv2d(conv.in_channels,
199 | conv.out_channels,
200 | kernel_size=conv.kernel_size,
201 | stride=conv.stride,
202 | padding=conv.padding,
203 | groups=conv.groups,
204 | bias=True).requires_grad_(False).to(conv.weight.device)
205 |
206 | # prepare filters
207 | w_conv = conv.weight.clone().view(conv.out_channels, -1)
208 | w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
209 | fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
210 |
211 | # prepare spatial bias
212 | b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
213 | b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
214 | fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
215 |
216 | return fusedconv
217 |
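    | # Illustrative check (sketch): in eval mode the fused layer should reproduce
    | # conv followed by bn up to numerical precision.
    | #   conv = nn.Conv2d(3, 8, 3, bias=False).eval()
    | #   bn = nn.BatchNorm2d(8).eval()
    | #   fused = fuse_conv_and_bn(conv, bn)
    | #   x = torch.randn(1, 3, 32, 32)
    | #   assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)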
218 |
219 | def model_info(model, verbose=False, img_size=640):
220 | # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
221 | n_p = sum(x.numel() for x in model.parameters()) # number parameters
222 | n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
223 | if verbose:
224 | print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
225 | for i, (name, p) in enumerate(model.named_parameters()):
226 | name = name.replace('module_list.', '')
227 | print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
228 | (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
229 |
230 | try: # FLOPS
231 | from thop import profile
232 | stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
233 | img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
234 | flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS
235 | img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
236 | fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS
237 |     except Exception:  # thop missing or model incompatible with profiling
238 | fs = ''
239 |
240 | logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
241 |
242 |
243 | def load_classifier(name='resnet101', n=2):
244 | # Loads a pretrained model reshaped to n-class output
245 | model = torchvision.models.__dict__[name](pretrained=True)
246 |
247 | # ResNet model properties
248 | # input_size = [3, 224, 224]
249 | # input_space = 'RGB'
250 | # input_range = [0, 1]
251 | # mean = [0.485, 0.456, 0.406]
252 | # std = [0.229, 0.224, 0.225]
253 |
254 | # Reshape output to n classes
255 | filters = model.fc.weight.shape[1]
256 | model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
257 | model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
258 | model.fc.out_features = n
259 | return model
260 |
261 |
262 | def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
263 | # scales img(bs,3,y,x) by ratio constrained to gs-multiple
264 | if ratio == 1.0:
265 | return img
266 | else:
267 | h, w = img.shape[2:]
268 | s = (int(h * ratio), int(w * ratio)) # new size
269 | img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
270 | if not same_shape: # pad/crop img
271 | h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
272 | return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
273 |
274 |
275 | def copy_attr(a, b, include=(), exclude=()):
276 | # Copy attributes from b to a, options to only include [...] and to exclude [...]
277 | for k, v in b.__dict__.items():
278 | if (len(include) and k not in include) or k.startswith('_') or k in exclude:
279 | continue
280 | else:
281 | setattr(a, k, v)
282 |
283 |
284 | class ModelEMA:
285 | """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
286 | Keep a moving average of everything in the model state_dict (parameters and buffers).
287 | This is intended to allow functionality like
288 | https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
289 | A smoothed version of the weights is necessary for some training schemes to perform well.
290 | This class is sensitive where it is initialized in the sequence of model init,
291 | GPU assignment and distributed training wrappers.
292 | """
293 |
294 | def __init__(self, model, decay=0.9999, updates=0):
295 | # Create EMA
296 | self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
297 | # if next(model.parameters()).device.type != 'cpu':
298 | # self.ema.half() # FP16 EMA
299 | self.updates = updates # number of EMA updates
300 | self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
301 | for p in self.ema.parameters():
302 | p.requires_grad_(False)
303 |
304 | def update(self, model):
305 | # Update EMA parameters
306 | with torch.no_grad():
307 | self.updates += 1
308 | d = self.decay(self.updates)
309 |
310 | msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
311 | for k, v in self.ema.state_dict().items():
312 | if v.dtype.is_floating_point:
313 | v *= d
314 | v += (1. - d) * msd[k].detach()
315 |
316 | def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
317 | # Update EMA attributes
318 | copy_attr(self.ema, model, include, exclude)
319 |
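    | # Illustrative usage (sketch) inside a training loop (compute_loss and evaluate
    | # are hypothetical helpers):
    | #   ema = ModelEMA(model)
    | #   for imgs, targets in dataloader:
    | #       loss = compute_loss(model(imgs), targets)
    | #       loss.backward()
    | #       optimizer.step()
    | #       optimizer.zero_grad()
    | #       ema.update(model)  # fold the new weights into the moving average
    | #   evaluate(ema.ema)      # validate with the smoothed weights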
320 |
321 | class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm):
322 | def _check_input_dim(self, input):
323 | # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc
324 | # is this method that is overwritten by the sub-class
325 |         # The original goal of this method was tensor sanity checks
326 |         # If you're OK bypassing those sanity checks (e.g. if you trust your inference
327 |         # to provide the right dimensional inputs), then you can just use this method
328 | # for easy conversion from SyncBatchNorm
329 | # (unfortunately, SyncBatchNorm does not store the original class - if it did
330 | # we could return the one that was originally created)
331 | return
332 |
333 | def revert_sync_batchnorm(module):
334 | # this is very similar to the function that it is trying to revert:
335 | # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679
336 | module_output = module
337 | if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm):
338 | new_cls = BatchNormXd
339 | module_output = BatchNormXd(module.num_features,
340 | module.eps, module.momentum,
341 | module.affine,
342 | module.track_running_stats)
343 | if module.affine:
344 | with torch.no_grad():
345 | module_output.weight = module.weight
346 | module_output.bias = module.bias
347 | module_output.running_mean = module.running_mean
348 | module_output.running_var = module.running_var
349 | module_output.num_batches_tracked = module.num_batches_tracked
350 | if hasattr(module, "qconfig"):
351 | module_output.qconfig = module.qconfig
352 | for name, child in module.named_children():
353 | module_output.add_module(name, revert_sync_batchnorm(child))
354 | del module
355 | return module_output
356 |
357 |
358 | class TracedModel(nn.Module):
359 |
360 | def __init__(self, model=None, device=None, img_size=(640,640)):
361 | super(TracedModel, self).__init__()
362 |
363 | print(" Convert model to Traced-model... ")
364 | self.stride = model.stride
365 | self.names = model.names
366 | self.model = model
367 |
368 | self.model = revert_sync_batchnorm(self.model)
369 | self.model.to('cpu')
370 | self.model.eval()
371 |
372 | self.detect_layer = self.model.model[-1]
373 | self.model.traced = True
374 |
375 |         rand_example = torch.rand(1, 3, *img_size) if isinstance(img_size, (tuple, list)) else torch.rand(1, 3, img_size, img_size)  # accept int or (h, w)
376 |
377 | traced_script_module = torch.jit.trace(self.model, rand_example, strict=False)
378 | #traced_script_module = torch.jit.script(self.model)
379 | traced_script_module.save("traced_model.pt")
380 | print(" traced_script_module saved! ")
381 | self.model = traced_script_module
382 | self.model.to(device)
383 | self.detect_layer.to(device)
384 | print(" model is traced! \n")
385 |
386 | def forward(self, x, augment=False, profile=False):
387 | out = self.model(x)
388 | out = self.detect_layer(out)
389 | return out
390 |
--------------------------------------------------------------------------------