├── LICENSE ├── README.md ├── View_your_training_results.ipynb ├── images ├── README.md └── thunder_compress_view.PNG ├── td_args ├── base_args.py └── readme.md ├── td_classes └── readme.md ├── td_datasets └── readme.md ├── td_optimizers ├── laprop.py └── readme.md └── td_utils ├── predict_utils.py ├── readme.md └── thunder_file_utils.py /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Thunder-Detr 2 | (unofficial) - a customized fork of DETR, optimized and tuned for object detection on 'real world' custom datasets 3 | 4 | This is a customized framework based on Facebook AI's DETR, with a number of improvements and modifications to optimize it for object detection on your own datasets. 5 | I had started a Colab notebook to show how to do this, but with more and more customizations piling up, each requiring more code modifications to the DETR core...
it became clear that building a codebase focused on handling custom datasets would be better and faster for all. 6 | Thus, Thunder-Detr was born on 8/1/2020. 7 | 8 | Updates: 9 | 8/22/20 - added thunder_file_utils.py.
10 | Adds coco_compressor, which remaps category_ids to contiguous values rebased to zero.
11 | Adds show_catids, which displays the categories in a json file and shows the proper "num_classes" value to use when training DETR.
12 | Usage:
13 | ![](https://github.com/lessw2020/Thunder-Detr/blob/master/images/thunder_compress_view.PNG) 14 | 15 |
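To make the remapping concrete, here is a minimal in-memory sketch of what coco_compressor does (the tiny sample dict below is made up for illustration; the real function reads and rewrites a COCO-format json file on disk):

```python
# hypothetical COCO-style dict with non-contiguous, non-zero-based category ids
coco = {
    "categories": [{"id": 5, "name": "cat"}, {"id": 9, "name": "dog"}],
    "images": [{"id": 101}, {"id": 205}],
    "annotations": [{"id": 7, "image_id": 205, "category_id": 9}],
}

# build old_id -> new_id mappings, then rewrite everything to contiguous,
# zero-based ids -- the same remapping coco_compressor applies
remap_catid = {c["id"]: i for i, c in enumerate(coco["categories"])}
remap_imageid = {im["id"]: i for i, im in enumerate(coco["images"])}
for i, c in enumerate(coco["categories"]):
    c["id"] = i
for i, im in enumerate(coco["images"]):
    im["id"] = i
for i, a in enumerate(coco["annotations"]):
    a["image_id"] = remap_imageid[a["image_id"]]
    a["category_id"] = remap_catid[a["category_id"]]
    a["id"] = i

print(coco["annotations"][0])  # {'id': 0, 'image_id': 1, 'category_id': 1}
```

Running show_catids on the compressed file then prints the remapped categories along with the "num_classes" value to hand to DETR.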
16 | Various changes built into Thunder-Detr to improve results:
17 | 1 - recommend the LaProp optimizer instead of AdamW.
18 | 2 - recommend a batch size of 4 (vs the default of 2 in DETR)
19 | 3 - recommend CIoU loss over DETR's default GIoU
20 | 4 - recommend additional augmentations, especially ColorJitter
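The batch size recommendation is already reflected in td_args/base_args.py, where batch_size defaults to 4. As a sketch, here is a miniature version of that parser, trimmed to a few of its flags, showing how the defaults and command-line overrides interact:

```python
import argparse

# miniature version of get_base_args() from td_args/base_args.py,
# reduced to a few of the knobs the recommendations above touch
parser = argparse.ArgumentParser('Set transformer detector', add_help=False)
parser.add_argument('--batch_size', default=4, type=int)    # Thunder-Detr default (DETR itself uses 2)
parser.add_argument('--lr', default=1e-4, type=float)
parser.add_argument('--clip_max_norm', default=0.1, type=float)

defaults = parser.parse_args([])
print(defaults.batch_size)  # 4

# overriding from the command line, e.g. for a larger GPU
override = parser.parse_args(['--batch_size', '8'])
print(override.batch_size)  # 8
```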
21 | 22 | 23 | 24 | 25 | 26 | 27 | -------------------------------------------------------------------------------- /images/README.md: -------------------------------------------------------------------------------- 1 | placeholder for images dir. This is to house images used in other readmes. 2 | -------------------------------------------------------------------------------- /images/thunder_compress_view.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lessw2020/Thunder-Detr/5e5052fda107c671c118e941afd621f90b242300/images/thunder_compress_view.PNG -------------------------------------------------------------------------------- /td_args/base_args.py: -------------------------------------------------------------------------------- 1 | # config file for custom Thunder-detr obj detection 2 | # 3 | 4 | import argparse 5 | 6 | def get_base_args(): 7 | parser = argparse.ArgumentParser('Set transformer detector', add_help=False) 8 | 9 | parser.add_argument('--num_classes', default=1, type=int) 10 | 11 | parser.add_argument('--lr', default=1e-4, type=float) 12 | parser.add_argument('--lr_backbone', default=1e-5, type=float) 13 | parser.add_argument('--batch_size', default=4, type=int) 14 | parser.add_argument('--weight_decay', default=1e-4, type=float) 15 | parser.add_argument('--epochs', default=50, type=int) 16 | parser.add_argument('--lr_drop', default=30, type=int) 17 | parser.add_argument('--clip_max_norm', default=0.1, type=float, 18 | help='gradient clipping max norm') 19 | 20 | # Model parameters 21 | parser.add_argument('--frozen_weights', type=str, default=None, 22 | help="Path to the pretrained model. 
If set, only the mask head will be trained") 23 | # * Backbone 24 | parser.add_argument('--backbone', default='resnet50', type=str, 25 | help="Name of the convolutional backbone to use") 26 | parser.add_argument('--dilation', action='store_true', 27 | help="If true, we replace stride with dilation in the last convolutional block (DC5)") 28 | parser.add_argument('--position_embedding', default='sine', type=str, choices=('sine', 'learned'), 29 | help="Type of positional embedding to use on top of the image features") 30 | 31 | # * Transformer 32 | parser.add_argument('--enc_layers', default=6, type=int, 33 | help="Number of encoding layers in the transformer") 34 | parser.add_argument('--dec_layers', default=6, type=int, 35 | help="Number of decoding layers in the transformer") 36 | parser.add_argument('--dim_feedforward', default=2048, type=int, 37 | help="Intermediate size of the feedforward layers in the transformer blocks") 38 | parser.add_argument('--hidden_dim', default=256, type=int, 39 | help="Size of the embeddings (dimension of the transformer)") 40 | parser.add_argument('--dropout', default=0.1, type=float, 41 | help="Dropout applied in the transformer") 42 | parser.add_argument('--nheads', default=8, type=int, 43 | help="Number of attention heads inside the transformer's attentions") 44 | parser.add_argument('--num_queries', default=100, type=int, 45 | help="Number of query slots") 46 | parser.add_argument('--pre_norm', action='store_true') 47 | 48 | # * Segmentation 49 | parser.add_argument('--masks', action='store_true', 50 | help="Train segmentation head if the flag is provided") 51 | 52 | # Loss 53 | parser.add_argument('--no_aux_loss', dest='aux_loss', action='store_false', 54 | help="Disables auxiliary decoding losses (loss at each layer)") 55 | # * Matcher 56 | parser.add_argument('--set_cost_class', default=1, type=float, 57 | help="Class coefficient in the matching cost") 58 | parser.add_argument('--set_cost_bbox', default=5, type=float, 59 | 
help="L1 box coefficient in the matching cost") 60 | parser.add_argument('--set_cost_giou', default=2, type=float, 61 | help="giou box coefficient in the matching cost") 62 | # * Loss coefficients 63 | parser.add_argument('--mask_loss_coef', default=1, type=float) 64 | parser.add_argument('--dice_loss_coef', default=1, type=float) 65 | parser.add_argument('--bbox_loss_coef', default=5, type=float) 66 | parser.add_argument('--giou_loss_coef', default=2, type=float) 67 | parser.add_argument('--eos_coef', default=0.1, type=float, 68 | help="Relative classification weight of the no-object class") 69 | 70 | # dataset parameters 71 | parser.add_argument('--dataset_file', default='coco') 72 | parser.add_argument('--coco_path', type=str) 73 | parser.add_argument('--coco_panoptic_path', type=str) 74 | parser.add_argument('--remove_difficult', action='store_true') 75 | 76 | parser.add_argument('--output_dir', default='', 77 | help='path where to save, empty for no saving') 78 | parser.add_argument('--device', default='cuda', 79 | help='device to use for training / testing') 80 | parser.add_argument('--seed', default=2021, type=int) 81 | parser.add_argument('--resume', default='', help='resume from checkpoint') 82 | parser.add_argument('--start_epoch', default=0, type=int, metavar='N', 83 | help='start epoch') 84 | parser.add_argument('--eval', action='store_true') 85 | parser.add_argument('--num_workers', default=2, type=int) 86 | 87 | # distributed training parameters 88 | parser.add_argument('--world_size', default=1, type=int, 89 | help='number of distributed processes') 90 | parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') 91 | return parser 92 | -------------------------------------------------------------------------------- /td_args/readme.md: -------------------------------------------------------------------------------- 1 | this folder holds the respective args passed in for a specific dataset 2 | 
-------------------------------------------------------------------------------- /td_classes/readme.md: -------------------------------------------------------------------------------- 1 | folder for custom dataset classes 2 | -------------------------------------------------------------------------------- /td_datasets/readme.md: -------------------------------------------------------------------------------- 1 | folder to hold your custom datasets 2 | -------------------------------------------------------------------------------- /td_optimizers/laprop.py: -------------------------------------------------------------------------------- 1 | from torch.optim import Optimizer 2 | import math 3 | import torch 4 | 5 | # source: https://github.com/Z-T-WANG/LaProp-Optimizer 6 | # paper: https://arxiv.org/abs/2002.04839 7 | 8 | class LaProp(Optimizer): 9 | def __init__(self, params, lr=4e-4, betas=(0.9, 0.999), eps=1e-15, 10 | weight_decay=0, amsgrad=False, centered=False): 11 | 12 | self.steps_before_using_centered = 10 13 | 14 | if not 0.0 <= lr: 15 | raise ValueError("Invalid learning rate: {}".format(lr)) 16 | if not 0.0 <= eps: 17 | raise ValueError("Invalid epsilon value: {}".format(eps)) 18 | if not 0.0 <= betas[0] < 1.0: 19 | raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) 20 | if not 0.0 <= betas[1] < 1.0: 21 | raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) 22 | defaults = dict(lr=lr, betas=betas, eps=eps, 23 | weight_decay=weight_decay, amsgrad=amsgrad, centered=centered) 24 | super(LaProp, self).__init__(params, defaults) 25 | 26 | def step(self, closure=None): 27 | """Performs a single optimization step. 28 | 29 | Arguments: 30 | closure (callable, optional): A closure that reevaluates the model 31 | and returns the loss. 
32 | """ 33 | loss = None 34 | if closure is not None: 35 | loss = closure() 36 | 37 | for group in self.param_groups: 38 | for p in group['params']: 39 | if p.grad is None: 40 | continue 41 | grad = p.grad.data 42 | if grad.is_sparse: 43 | raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead') 44 | amsgrad = group['amsgrad'] 45 | centered = group['centered'] 46 | 47 | state = self.state[p] 48 | 49 | # State initialization 50 | if len(state) == 0: 51 | state['step'] = 0 52 | # Exponential moving average of gradient values 53 | state['exp_avg'] = torch.zeros_like(p.data) 54 | # Exponential moving average of learning rates 55 | state['exp_avg_lr_1'] = 0.; state['exp_avg_lr_2'] = 0. 56 | # Exponential moving average of squared gradient values 57 | state['exp_avg_sq'] = torch.zeros_like(p.data) 58 | if centered: 59 | # Exponential moving average of gradient values as calculated by beta2 60 | state['exp_mean_avg_beta2'] = torch.zeros_like(p.data) 61 | if amsgrad: 62 | # Maintains max of all exp. moving avg. of sq. grad. values 63 | state['max_exp_avg_sq'] = torch.zeros_like(p.data) 64 | 65 | exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] 66 | if centered: 67 | exp_mean_avg_beta2 = state['exp_mean_avg_beta2'] 68 | if amsgrad: 69 | max_exp_avg_sq = state['max_exp_avg_sq'] 70 | beta1, beta2 = group['betas'] 71 | 72 | state['step'] += 1 73 | 74 | # Decay the first and second moment running average coefficient 75 | exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) 76 | 77 | state['exp_avg_lr_1'] = state['exp_avg_lr_1'] * beta1 + (1 - beta1) * group['lr'] 78 | state['exp_avg_lr_2'] = state['exp_avg_lr_2'] * beta2 + (1 - beta2) 79 | 80 | bias_correction1 = state['exp_avg_lr_1'] / group['lr'] if group['lr']!=0. else 1. 
#1 - beta1 ** state['step'] 81 | step_size = 1 / bias_correction1 82 | 83 | bias_correction2 = state['exp_avg_lr_2'] 84 | 85 | denom = exp_avg_sq 86 | if centered: 87 | exp_mean_avg_beta2.mul_(beta2).add_(grad, alpha=1 - beta2) 88 | if state['step'] > self.steps_before_using_centered: 89 | mean = exp_mean_avg_beta2 ** 2 90 | denom = denom - mean 91 | 92 | if amsgrad: 93 | if not (centered and state['step'] <= self.steps_before_using_centered): 94 | # Maintains the maximum of all (centered) 2nd moment running avg. till now 95 | torch.max(max_exp_avg_sq, denom, out=max_exp_avg_sq) 96 | # Use the max. for normalizing running avg. of gradient 97 | denom = max_exp_avg_sq 98 | 99 | denom = denom.div(bias_correction2).sqrt_().add_(group['eps']) 100 | step_of_this_grad = grad / denom 101 | exp_avg.mul_(beta1).add_(step_of_this_grad, alpha=(1 - beta1) * group['lr']) 102 | 103 | p.data.add_(exp_avg, alpha=-step_size) 104 | if group['weight_decay'] != 0: 105 | p.data.add_(p.data, alpha=-group['weight_decay']) 106 | 107 | return loss 108 | -------------------------------------------------------------------------------- /td_optimizers/readme.md: -------------------------------------------------------------------------------- 1 | holds custom optimizers (lapropw, adahessian, etc). 2 | for laprop, gradient clipping is likely not needed. 3 | will also test diffgrad soon. 4 | -------------------------------------------------------------------------------- /td_utils/predict_utils.py: -------------------------------------------------------------------------------- 1 | # thunder-detr predict_utils 2 | # 3 | import torch  # needed for torch.Tensor in thunder_detect 4 | import numpy as np 5 | from PIL import Image 6 | import torchvision.transforms as tv 7 | 8 | # use torchvision transforms for image prep...
9 | 10 | resize_transform = tv.Resize(800) 11 | 12 | without_resize_transform = tv.Compose([ 13 | tv.ToTensor(), 14 | tv.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) # Imagenet stats 15 | ]) 16 | 17 | val_transform = tv.Compose([ 18 | tv.Resize(800), 19 | tv.ToTensor(), 20 | tv.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) # Imagenet stats 21 | ]) 22 | 23 | def make_cpu_array(box): 24 | '''move tensor from gpu to cpu and convert to array''' 25 | 26 | if not isinstance(box, np.ndarray): 27 | try: 28 | box = box.cpu().detach() 29 | except AttributeError: 30 | print("failed to convert box to np.ndarray") 31 | return None 32 | 33 | box = np.asarray(box) 34 | return box 35 | 36 | 37 | 38 | # actual predictions... 39 | 40 | def thunder_detect(orig_img, model, min_confidence=.75): 41 | # orig_img is a PIL image direct from drive, so first make a copy to pass in as the smaller image to detect 42 | pred_img = orig_img.copy() 43 | pred_img = resize_transform(pred_img) # resize to prediction size 44 | 45 | scores, boxes = detect(pred_img, model, transform=val_transform) # regular detr detect goes here 46 | 47 | if len(boxes): 48 | boxes = [upscale_box(box, pred_img.size, orig_img.size) for box in boxes] 49 | boxes = torch.Tensor(boxes) 50 | 51 | return scores, boxes 52 | 53 | # plot with upscaled boxes 54 | 55 | 56 | def upscale_box(box, pred_size, orig_size): 57 | '''upscale box from image used for prediction back to actual image size''' 58 | 59 | x_scale = orig_size[0]/pred_size[0] 60 | y_scale = orig_size[1]/pred_size[1] 61 | 62 | # needs to be moved to cpu if run on gpu...
63 | #box = box.cpu().detach() 64 | # or 65 | box = make_cpu_array(box) 66 | 67 | x, y, w, h = np.asarray(box) 68 | 69 | # upscale all dimensions 70 | x *= x_scale 71 | y *= y_scale 72 | w *= x_scale 73 | h *= y_scale 74 | 75 | upbox = np.asarray((x, y, w, h)) 76 | return upbox 77 | -------------------------------------------------------------------------------- /td_utils/readme.md: -------------------------------------------------------------------------------- 1 | folder for holding various utilities for custom datasets. 2 | need to add detection utils and various visual utils. 3 | -------------------------------------------------------------------------------- /td_utils/thunder_file_utils.py: -------------------------------------------------------------------------------- 1 | # thunder_file_utils 2 | # @lessw2020 3 | 4 | import json 5 | from pathlib import Path, PurePath 6 | 7 | def coco_compressor(fn_in, fn_out=None, prefix='compact_'): 8 | '''opens a coco-format json file, compresses and re-indexes category_ids to start at 0, 9 | remaps image_ids and annotation category_ids to the new mapping, 10 | and saves to fn_out''' 11 | 12 | # change str to Path if needed 13 | if isinstance(fn_in, str): 14 | fn_in = Path(fn_in) 15 | 16 | with open(fn_in) as f: 17 | j = json.load(f) 18 | 19 | remap_catid = {} 20 | remap_imageid = {} 21 | 22 | for i, item in enumerate(j['categories']): 23 | # save old_id to new mapping 24 | remap_catid[item['id']] = i 25 | # write new id 26 | item['id'] = i 27 | 28 | for i, item in enumerate(j['images']): 29 | remap_imageid[item['id']] = i 30 | item['id'] = i 31 | 32 | for i, item in enumerate(j['annotations']): 33 | item['image_id'] = remap_imageid[item['image_id']] 34 | item['category_id'] = remap_catid[item['category_id']] 35 | item['id'] = i 36 | 37 | if not fn_out: 38 | fn_out = prefix + fn_in.name 39 | 40 | print(f"Saving {fn_in} to {fn_out}") 41 | 42 | with open(fn_out, 'w') as f: 43 | json.dump(j, f) 44 | 45 | 46 | def show_catids(fn):
'''display the category_ids for a given json file and 48 | compute the proper num_classes for detr. 49 | ''' 50 | max_catid = 0 51 | 52 | if not isinstance(fn, PurePath): 53 | fn = Path(fn) 54 | 55 | with open(fn) as f: 56 | j = json.load(f) 57 | 58 | for i, item in enumerate(j['categories']): 59 | print(item) 60 | if item['id'] > max_catid: 61 | max_catid = item['id'] 62 | 63 | total_catids = i + 1 # i is zero based 64 | num_classes = total_catids if total_catids > max_catid else max_catid 65 | 66 | print(f"\ntotal categories = {total_catids}. Use {num_classes} for detr 'num_classes'") 67 | 68 | --------------------------------------------------------------------------------