├── .gitignore ├── LICENSE ├── README.md ├── assets ├── Cascade_R-CNN_Net.png ├── Consistent Detection.png ├── Consistent Localization.png ├── RefineDet_Net.png ├── RefineDet_TCB.png ├── consistent_optimization_SOTA.png ├── consistent_optimization_cls_reg.png ├── consistent_optimization_deep_scale.png ├── consistent_optimization_exp1.png ├── consistent_optimization_in_out_iou.png ├── consistent_optimization_misalignment.png ├── consistent_optimization_net.png ├── consistent_optimization_ssd.png ├── detnet_1.png ├── detnet_2.png ├── refinedet_loss.png └── 屏幕快照 2019-06-20 下午11.27.42.png ├── backbone ├── deepen │ └── resnet.pdf ├── feature │ └── DenseNet.pdf ├── mobile │ └── MobileNets.pdf └── widen │ ├── GoogLeNet_InceptionV1.pdf │ ├── InceptionV2&V3.pdf │ ├── InceptionV4&Inception-ResNet.pdf │ └── Xception_CVPR_2017.pdf └── detection ├── Survey_Generic Object Detection.pdf ├── one_stage ├── 1-YOLOv1.pdf ├── 10-M2Det.pdf ├── 11-Consistent Optimization for Single-Shot Object Detection.pdf ├── 2-SSD.pdf ├── 3-DSSD.pdf ├── 4-YOLOv2.pdf ├── 5-Focal_arxiv.pdf ├── 5_Focal_Loss_ICCV17.pdf ├── 6-DSOD.pdf ├── 7-YOLOv3.pdf ├── 8-RefineDet.pdf ├── 8-RefineDet_sup.pdf └── 9-RFBNet.pdf └── two_stage ├── 1-RCNN.pdf ├── 10-soft-NMS .pdf ├── 11-Cascade_R-CNN.pdf ├── 12-IoUNet.pdf ├── 13-TridentNet.pdf ├── 2-SPPNet.pdf ├── 3-Fast_R-CNN.pdf ├── 4-Faster R-CNN.pdf ├── 5-OHEM.pdf ├── 6-R-FCN.pdf ├── 7-FPN.pdf ├── 8-dcn.pdf ├── 9-Mask_R-CNN.pdf └── 9-mask r-cnn_arxiv.pdf /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files 
are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | .pytest_cache/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | local_settings.py 57 | db.sqlite3 58 | 59 | # Flask stuff: 60 | instance/ 61 | .webassets-cache 62 | 63 | # Scrapy stuff: 64 | .scrapy 65 | 66 | # Sphinx documentation 67 | docs/_build/ 68 | 69 | # PyBuilder 70 | target/ 71 | 72 | # Jupyter Notebook 73 | .ipynb_checkpoints 74 | 75 | # pyenv 76 | .python-version 77 | 78 | # celery beat schedule file 79 | celerybeat-schedule 80 | 81 | # SageMath parsed files 82 | *.sage.py 83 | 84 | # Environments 85 | .env 86 | .venv 87 | env/ 88 | venv/ 89 | ENV/ 90 | env.bak/ 91 | venv.bak/ 92 | 93 | # Spyder project settings 94 | .spyderproject 95 | .spyproject 96 | 97 | # Rope project settings 98 | .ropeproject 99 | 100 | # mkdocs documentation 101 | /site 102 | 103 | # mypy 104 | .mypy_cache/ 105 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 spectre 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and 
this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## 一、Classic detection models 2 | 3 | ### 1.Proposal or not 4 | 5 | #### 1.1 One-stage 6 | 7 | **OverFeat(ICLR,2014)——>YOLOv1(CVPR,2016)——>SSD(ECCV,2016)——>DSSD(Arxiv,2017)——>YOLOv2(CVPR,2017)——>RetinaNet(ICCV,2017)——>DSOD(ICCV,2017)——>YOLOv3(Arxiv,2018)——>RefineDet(CVPR,2018)——>RFBNet(ECCV,2018)——>M2Det(AAAI,2019)——>Consistent Optimization(arXiv,2019)** 8 | 9 | #### 1.2 Two-stage 10 | 11 | **R-CNN(CVPR,2014)——>SPPNet(ECCV,2014)——>Fast RCNN(ICCV,2015)——>Faster RCNN(NIPS,2015)——>OHEM(CVPR,2016)——>R-FCN(NIPS,2016)——>FPN(CVPR,2017)——>DCN(ICCV,2017)——>Mask RCNN(ICCV,2017)——>Soft-NMS(ICCV,2017)——>Cascade R-CNN(CVPR,2018)——>IoUNet(ECCV,2018)——>TridentNet(arXiv,2019)** 12 | 13 | #### 1.3 One-Two Combination 14 | 15 | **RefineDet(CVPR,2018)** 16 | 17 | ### 2.Improvements of detection modules 18 | 19 | #### 2.1 RPN-based 20 | 21 | [MR-CNN] 22 | 23 | [FPN] 24 | 25 | [CRAFT] 26 | 27 | [R-CNN for Small Object Detection] 28 | 29 | #### 2.2 ROI-based 30 | 31 | [RFCN] 32 | 33 | [CoupleNet] 34 | 35 | [Mask R-CNN] 36 | 37 | [Cascade R-CNN] 38 | 39 | #### 2.3 NMS-based 40 | 41 | **[Soft-NMS(ICCV,2017)]** 42 | 43 | [Softer-NMS] 44 | 45 | [ConvNMS] 46 | 47 | [Pure NMS Network] 48 | 49 | [Fitness NMS] 50 | 51 | #### 2.4 anchor-based
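Most anchor-level work starts from the standard scale/aspect-ratio enumeration used since Faster R-CNN. A minimal sketch of that enumeration (the function name and the `(x1, y1, x2, y2)` box format are my own, not from any particular paper):

```python
from math import sqrt

def make_anchors(cx, cy, scales, ratios):
    """Enumerate anchor boxes centered at (cx, cy).

    Each anchor has area scale**2 and aspect ratio w/h = ratio,
    and is returned as (x1, y1, x2, y2).
    """
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * sqrt(r)   # width grows with the aspect ratio
            h = s / sqrt(r)   # height shrinks, keeping area s*s
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

# e.g. 3 scales x 3 ratios = 9 anchors per feature-map cell, as in Faster R-CNN
anchors = make_anchors(8.0, 8.0, scales=[32, 64, 128], ratios=[0.5, 1.0, 2.0])
```

GA-RPN (Guided Anchoring) replaces exactly this uniform enumeration with learned anchor locations and shapes.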
52 | 53 | [GA-RPN(CVPR2019)] 54 | 55 | ### 3.Improvements for specific problems 56 | 57 | #### 3.1 small object 58 | 59 | 1. Data augmentation. Simple, brute-force and effective: doing the sampling right can greatly improve performance on small objects. There are quite a few tricks here; see data-anchor-sampling in the PyramidBox paper. 60 | 61 | For Mask R-CNN, augment the data by randomly copy-pasting small objects within an image to raise the anchors' hit rate. 62 | 63 | 2. Feature fusion. The simplest brute-force yet effective approach, though it costs speed: connect high- and low-resolution feature maps across layers and restore small feature maps with a decoder module. 64 | 65 | [FPN] 66 | [DSSD] 67 | [R-SSD]() 68 | [M2Det] 69 | 70 | 3. Take feature maps from the low levels of the backbone (where the stride is small); the corresponding anchor sizes can then be set relatively large. 71 | 72 | 4. Exploit context by modeling the relation between small objects and their surroundings; alternatively use dilated convolutions for mixed receptive fields, or add an SSH-like module to the head. 73 | 74 | [R-CNN for Small Object Detection] 75 | 76 | 5. Making the bboxes of small objects more accurate: 77 | 78 | IoU loss, Cascade R-CNN 79 | 80 | 6. See the CVPR papers SNIP/SNIPER. 81 | 82 | 7. Design at the anchor level: 83 | 84 | anchor densification (from the FaceBoxes paper), 85 | 86 | anchor matching strategy (from the SFD paper). 87 | 88 | 8. Model relations between objects, along the lines of Relation Network. 89 | 90 | [Relation Network for Object Detection] 91 | 92 | 9. Use a GAN and add an adversarial stage behind the detector. 93 | 94 | In practice GANs are mainly used here for super-resolution, enlarging and sharpening blurry small objects, rather than for adversarial training against the detector. 95 | 96 | Upsampling: super-resolve first, then detect. 97 | 98 | 10. 
Constrain the confidence-related feature maps with soft attention, or add some pixel-wise attention. 99 | 100 | References: 101 | 102 | [What progress has deep learning made on small object detection?](https://www.zhihu.com/question/272322209) 103 | 104 | [How is a "small object" defined, what are the main technical difficulties, and which traditional or deep-learning methods work well?](https://www.zhihu.com/question/269877902) 105 | 106 | 107 | #### 3.2 scale variation/Feature fusion 108 | 109 | [image pyramid/multi-scale testing] 110 | 111 | [feature pyramid] 112 | 113 | [anchor box] 114 | 115 | [M2Det] 116 | 117 | [FSSD] 118 | 119 | #### 3.3 occlusion 120 | 121 | [Repulsion Loss] 122 | 123 | [Occlusion-aware R-CNN] 124 | 125 | [Soft-NMS] 126 | 127 | [Bi-box] 128 | 129 | [R-DAD] 130 | 131 | #### 3.4 Imbalance of Positives & Negatives 132 | 133 | [OHEM(CVPR2016)] 134 | 135 | [A-Fast-RCNN(CVPR2017)] 136 | 137 | [Focal loss(ICCV2017)] 138 | 139 | [GHM(AAAI2019)] 140 | 141 | #### 3.5 Mobile or Light Weight 142 | 143 | [Light-Head R-CNN] 144 | 145 | [ThunderNet] 146 | 147 | 148 | ## 二、Classic classification/detection backbones 149 | 150 | ### 1.deepen 151 | 152 | **(1)ResNet** 153 | 154 | ### 2.widen 155 | 156 | **(1)Inception** 157 | 158 | ### 3.smaller 159 | 160 | **(1)MobileNet** 161 | 162 | **(2)ShuffleNet** 163 | 164 | **(3)Pelee** 165 | 166 | ### 4.feature 167 | 168 | **(1)DenseNet** 169 | 170 | **(2)SENet** 171 | 172 | ### 5.detection specific 173 | 174 | **(1)DarkNet** 175 | 176 | **(2)DetNet** 177 | 178 | **(3)Res2Net** 179 | 180 | 181 | ## 三、Detection modules 182 | 183 | ### 1.Selective Search & RPN 184 | 185 | ### 2.ROI pooling & ROI align 186 | 187 | ### 3.[IoU]() 188 | 189 | ### 4.NMS 190 | 191 | ### 5.[Generic metrics]() 192 | 193 | ### 6.[mAP]() 194 | 195 | ## 四、Classic paper notes and source code (PyTorch) 196 | 197 | #### 1.SSD 198 | 199 | [SSD](https://zhuanlan.zhihu.com/p/24954433) 200 | 201 | [SSD object detection](https://zhuanlan.zhihu.com/p/31427288) 202 | 203 | [SSD object detection notes](https://zhuanlan.zhihu.com/p/42179282) 204 | 205 | [Object detection | SSD: principles and implementation](https://zhuanlan.zhihu.com/p/33544892) 206 | 207 | [SSD in depth: understanding default boxes](https://blog.csdn.net/wfei101/article/details/78597442) 208 | 209 | [SSD source-code implementation (PyTorch)](https://hellozhaozheng.github.io/z_post/PyTorch-SSD) 210 | 211 | [A PyTorch implementation and walkthrough of SSD](https://www.cnblogs.com/cmai/p/10080005.html) 212 | 213 | [SSD code walkthrough (3): MultiboxLoss](https://www.oipapio.com/cn/article-3321055) 214 | 215 | #### 2.RFBNet 216 | 217 | 《Receptive Field Block Net for Accurate and Fast Object Detection》 218 | 219 | [Official code]() 220 | 221 | [Paper notes]() 222 | 223 | [Source-code walkthrough](https://zhuanlan.zhihu.com/p/41450062) 224 | 225 | RFB module + SSD: dilated convolutions borrowed from the Inception structure. 226 | 227 | #### 3.DetNet 228 | 229 | 《DetNet: A Backbone network for Object Detection》 230 | 231 | ##### Key points 232 | 233 | A backbone designed specifically for object detection: no resolution reduction in the top stages + dilated convolutions + reduced width in the top stages. 234 | 235 | ![detnet_1](assets/detnet_1.png) 236 | 237 | ##### Motivation 238 | 239 | (1) Classification and detection are different tasks, so features extracted by a classification model trained on classification data are not necessarily suitable for detection; for instance, detection cares about an object's scale, while classification may not. 240 | 241 | (2) Detection must localize objects as well as classify them. This difference causes problems: the downsampling common in classification networks may help classification, since it enlarges the receptive field, but it is not necessarily good for a detection task that has to localize objects, because positional information is lost. 242 | 243 | ##### Contribution 244 | 245 | (1) Increase the resolution of the top-stage output features; in other words, the top stages no longer shrink the feature maps. 246 | 247 | (2) Introduce dilated convolution layers to enlarge the receptive field of the top stages, compensating for the loss caused by the first change. 248 | 249 | (3) Reduce the width of the top stages, cutting the extra computation brought by the higher resolution. 250 | 251 | ##### Method 252 | 253 | If the top stages downsample less than a classification network does (stride 16 instead of stride 32), two problems follow: 254 | 255 | (1) More computation. This is easy to see: the feature maps are larger than before, so extra computation is unavoidable. 256 | 257 | (2) A smaller receptive field in the top stages. Receptive field and information loss sit on a seesaw: having chosen to lose as little high-level feature information as possible, a smaller receptive field is to be expected. 258 | 259 | So how are these two problems solved? 
260 | 261 | (1) For problem 1, the width of the top stages is reduced. Figure D below shows this clearly: the input channels of every block in the top stages are 256, whereas in common classification networks such as ResNet the channel count usually grows in the higher stages. 262 | (2) For problem 2, dilated convolution layers are introduced to enlarge the receptive field, as shown in A and B below. Comparing with ResNet's residual block (C below), the main change is replacing the ordinary 3×3 convolution with a dilated one; A and B are thus the basic building blocks of the DetNet network (shown in D below). 263 | 264 | ![detnet_2](assets/detnet_2.png) 265 | 266 | Reference: [DetNet paper notes](https://blog.csdn.net/u014380165/article/details/81582623) 267 | 268 | #### 4.Cascade R-CNN 269 | 270 | ![Cascade_R-CNN_Net](assets/Cascade_R-CNN_Net.png) 271 | 272 | Github: [PyTorch reimplementation](https://link.zhihu.com/?target=https%3A//github.com/guoruoqian/cascade-rcnn_Pytorch) 273 | 274 | References: 275 | 276 | [Cascade RCNN paper notes](https://blog.csdn.net/u014380165/article/details/80602027) 277 | 278 | [Selected CVPR18 detection papers (part 1)](https://zhuanlan.zhihu.com/p/35882192) 279 | 280 | [Paper reading: Cascade R-CNN: Delving into High Quality Object Detection](https://zhuanlan.zhihu.com/p/36095768) 281 | 282 | [Cascade R-CNN explained in detail](https://zhuanlan.zhihu.com/p/42553957) 283 | 284 | #### 5.RefineDet 285 | 286 | ##### Key points 287 | 288 | SSD+RPN+FPN 289 | 290 | (1) Borrows the coarse-to-fine box regression of two-stage detectors: an RPN first produces coarse box estimates, which a regular regression branch then refines into more precise boxes; 291 | 292 | (2) Adds FPN-like feature fusion, which effectively improves small-object detection; the detection framework itself is still SSD. 293 | 294 | ##### Motivation 295 | 296 | Two-stage detectors have three advantages over one-stage ones: 297 | 298 | (1) a two-stage sampling structure that handles class imbalance; 299 | (2) a two-step cascade that first extracts coarse boxes and then regresses them further to fit the bbox; 300 | (3) two stages of feature maps to describe the objects. 301 | 302 | ##### Method 303 | 304 | ![RefineDet_Net](assets/RefineDet_Net.png) 305 | 306 | The network consists of three parts: ARM, TCB and ODM. 307 | 308 | (1) ARM (Anchor Refinement Module) 309 | 310 | Coarsely filters anchors, discarding overly easy negatives to shrink the classifier's search space and lower the downstream computational cost. 311 | 312 | Coarsely adjusts anchor locations and sizes, giving ODM a better initialization. 313 | 314 | (2) TCB (Transfer Connection Block) 315 | 316 | Converts the feature maps output by ARM into inputs for ODM. TCB fuses feature levels: the high-semantic layer is upsampled (via deconvolution) and merged with the preceding layer, enriching the low-level features with semantics. It both passes the anchor information along and serves as a way of building a feature pyramid. 317 | 318 | The authors implement TCB with deconvolution and element-wise addition. 319 | 320 | ![RefineDet_TCB](assets/RefineDet_TCB.png) 321 | 322 | (3) ODM (Object Detection Module) 323 | 324 | 
ODM regresses the refined anchors to accurate object locations and predicts multi-class labels. What differs from SSD is that its anchors are the Refined Anchors produced by ARM, and its feature maps are the multi-semantic fused maps from TCB (which can substantially improve small-object detection). 325 | 326 | (4) two-step cascaded regression 327 | 328 | The authors argue that current one-stage detectors regress the target box only once, which may explain their weaker performance on some hard tasks. 329 | 330 | So, unlike SSD, RefineDet adopts a two-step regression strategy: ARM first generates rough boxes, and ODM then further refines the box boundaries on that basis. The authors argue this raises the model's overall accuracy and, in particular, improves small-object detection. 331 | 332 | (5) negative anchor filtering 333 | 334 | Negative sample filtering: ARM discards boxes whose negative confidence exceeds a threshold θ, whose empirical value is 0.99. In other words, ARM passes only positives and hard negatives on to ODM for further detection. 335 | 336 | Hard negative mining follows the same method as SSD, keeping the negative:positive ratio at 3:1. 337 | 338 | (6) Loss function 339 | 340 | RefineDet's loss has two parts, one for ARM and one for ODM, each containing a classification and a regression loss, so the total loss is: 341 | 342 | where i is the index of an anchor in the mini-batch, 343 | 344 | $l_i^*$ is the ground-truth class label of anchor i, 345 | 346 | $g_i^*$ is the ground-truth location and size of anchor i, 347 | 348 | $p_i$ and $x_i$ are anchor i's predicted confidence and the coordinates refined by ARM, 349 | 350 | $c_i$ and $t_i$ are the object class and bounding-box coordinates predicted by ODM, 351 | 352 | $N_{arm}$ and $N_{odm}$ are the numbers of positive anchors in ARM and ODM, 353 | 354 | $[l_i^* \ge 1]$ is the Iverson bracket indicator function, which outputs 1 if the condition inside holds and 0 otherwise. 355 | 356 | ![refinedet_loss](assets/refinedet_loss.png) 357 | 358 | 359 | 360 | References 361 | 362 | [[Paper reading] Single-Shot Refinement Neural Network for Object Detection](https://zhuanlan.zhihu.com/p/37873666) 363 | 364 | [http://www.baiyifan.cn/2019/03/10/RefineDet/](http://www.baiyifan.cn/2019/03/10/RefineDet/) 365 | 366 | [https://hellozhaozheng.github.io/z_post/%E8%AE%A1%E7%AE%97%E6%9C%BA%E8%A7%86%E8%A7%89-RefineDet-CVPR2018/](https://hellozhaozheng.github.io/z_post/计算机视觉-RefineDet-CVPR2018/) 367 | 368 | [RefineDet paper notes](https://blog.csdn.net/u014380165/article/details/79502308) 369 | 370 | #### 6. 
Consistent Optimization 371 | 372 | 《Consistent Optimization for Single-Shot Object Detection》 373 | 374 | ##### Motivation 375 | 376 | Single-shot detectors have two main weaknesses: foreground-background class imbalance, which focal loss addresses, and the mismatch between the training targets and the inference-time setup, which this paper addresses by exploiting the refined anchors during training. 377 | 378 | The mismatch, concretely: classification is trained to classify the default, regular anchors, but the predicted score is assigned to the corresponding regressed anchor produced by the localization branch. 379 | 380 | This training-inference configuration works well when the original anchor and the refined anchor match the same ground-truth object, but not in the following two cases. 381 | 382 | ![consistent_optimization_misalignment](assets/consistent_optimization_misalignment.png) 383 | 384 | (1) When two objects occlude each other, as in the figure above, both anchors match the bike, so the detector assigns both anchors the class bike. After classification and regression, the yellow box regresses onto the person yet still carries the bike label; this inconsistency can cause NMS to suppress the accurately localized anchor (red box) in favor of the wrong one (yellow box). 385 | 386 | ![consistent_optimization_in_out_iou](assets/consistent_optimization_in_out_iou.png) 387 | 388 | (2) The figure above shows that the output IoU after regression is generally larger than the input IoU, so some anchors labeled as negatives could in fact be positives after regression. 389 | 390 | Using the regressed anchors during training can therefore bridge this gap. 391 | 392 | ##### Consistent Optimization 393 | 394 | ###### Consistent Detection 395 | 396 | ![Consistent Detection](assets/Consistent Detection.png) 397 | 398 | ###### Consistent Localization 399 | 400 | ![Consistent Localization](assets/Consistent Localization.png) 401 | 402 | ##### Comparison to Prior Works 403 | 404 | ![consistent_optimization_net](assets/consistent_optimization_net.png) 405 | 406 | ##### Experiments 407 | 408 | ###### Comparison experiments 409 | 410 | ![consistent_optimization_exp1](assets/consistent_optimization_exp1.png) 411 | 412 | ###### Positive/negative sample hyperparameters 413 | 414 | ![屏幕快照 2019-06-20 下午11.27.42](assets/屏幕快照 2019-06-20 下午11.27.42.png) 415 | 416 | ###### Different numbers of classification/regression steps 417 | 418 | ![consistent_optimization_cls_reg](assets/consistent_optimization_cls_reg.png) 419 | 420 | ###### Generalization 421 | 422 | **Different network depths and input image scales** 423 | 424 | ![consistent_optimization_deep_scale](assets/consistent_optimization_deep_scale.png) 425 | 426 | **SSD** 427 | 428 | ![consistent_optimization_ssd](assets/consistent_optimization_ssd.png) 429 | 430 | ##### Comparison with SOTA 431 | 432 | ![consistent_optimization_SOTA](assets/consistent_optimization_SOTA.png) 433 
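The input/output IoU observation above is easy to reproduce with a small IoU helper (a sketch; the `(x1, y1, x2, y2)` box format and the toy boxes are my own):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 10, 10)       # ground-truth box
anchor = (5, 0, 15, 10)   # default anchor: IoU 1/3, a "negative" at the 0.5 threshold
refined = (1, 0, 11, 10)  # after regression: IoU 0.818, clearly positive
print(iou(anchor, gt), iou(refined, gt))
```

An anchor like this is labeled negative against the usual 0.5 threshold, yet its regressed box overlaps the ground truth well, which is exactly the gap that training on the refined anchors is meant to close.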
| 434 | #### 7.Focal Loss 435 | 436 | 《Focal Loss for Dense Object Detection》 437 | 438 | RetinaNet: ResNet+FPN+Focal Loss 439 | 440 | ##### Motivation 441 | 442 | The main reason one-stage detectors trail two-stage ones is the extreme imbalance between positive and negative samples: an image may yield tens of thousands of candidate locations, but only a very small fraction of them contain an object, which creates a class imbalance. 443 | 444 | This imbalance causes two problems: 445 | 446 | (1) training is inefficient, because most locations are easy negatives that contribute no useful learning signal; 447 | 448 | (2) overall, the easy negatives overwhelm training and degrade the model. 449 | 450 | OHEM (online hard example mining): each example is scored by its loss, non-maximum suppression (NMS) is then applied, and a minibatch is constructed from the highest-loss examples. 451 | 452 | #### 8.Light-Weight RetinaNet 453 | 454 | Two common ways to reduce FLOPs: 455 | 456 | (1) swap in a smaller backbone; 457 | 458 | (2) shrink the input image, which degrades accuracy steeply. 459 | 460 | Instead, reduce FLOPs only in the computation-intensive layers and keep the other layers unchanged; the accuracy-FLOPs trade-off then stays close to linear. 461 | 462 | 463 | 464 | ## 五、References 465 | 466 | [1]**(YOLOv1)** J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In CVPR, 2016. 467 | 468 | [2]**(SSD)** W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In ECCV, 2016. 469 | 470 | [3]**(DSSD)** C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. DSSD: Deconvolutional single shot detector. In arXiv, 2017. 471 | 472 | [4]**(YOLOv2)** J. Redmon and A. Farhadi. YOLO9000: Better, faster, stronger. In CVPR, 2017. 473 | 474 | [5]**(RetinaNet)** T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In ICCV, 2017. 475 | 476 | [6]**(DSOD)** Shen Z., Liu Z., Li J., Jiang Y., Chen Y., Xue X. DSOD: Learning deeply supervised object detectors from scratch. In ICCV, 2017. 477 | 478 | [7]**(YOLOv3)** J. Redmon and A. Farhadi. YOLOv3: An incremental improvement. In arXiv, 2018. 479 | 480 | [8]**(RefineDet)** S. Zhang, L. Wen, X. Bian, Z. Lei, and S. Z. Li. Single-shot refinement neural network for object detection. In CVPR, 2018. 481 | 482 | [9]**(RFBNet)** Songtao Liu, Di Huang, and Yunhong Wang. 
Receptive Field Block Net for Accurate and Fast Object Detection. In ECCV, 2018. 483 | 484 | [10]**(M2Det)** Qijie Zhao, Tao Sheng, Yongtao Wang, Zhi Tang, Ying Chen, Ling Cai and Haibin Ling. M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network. In AAAI, 2019. 485 | 486 | [11]**(Consistent Optimization)** Tao Kong, Fuchun Sun, Huaping Liu, Yuning Jiang and Jianbo Shi. Consistent Optimization for Single-Shot Object Detection. In arXiv, 2019. 487 | 488 | [12]**(R-CNN)** R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. 489 | 490 | [13]**(SPPNet)** K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014. 491 | 492 | [14]**(Fast R-CNN)** R. Girshick. Fast R-CNN. In ICCV, 2015. 493 | 494 | [15]**(Faster R-CNN)** S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. 495 | 496 | [16]**(OHEM)** Abhinav Shrivastava, Abhinav Gupta and Ross Girshick. Training Region-based Object Detectors with Online Hard Example Mining. In CVPR, 2016. 497 | 498 | [17]**(R-FCN)** J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In NIPS, 2016. 499 | 500 | [18]**(FPN)** T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. In CVPR, 2017. 501 | 502 | [19]**(DCN)** J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, 2017. 503 | 504 | [20]**(Mask R-CNN)** K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017. 505 | 506 | [21]**(Soft-NMS)** N. Bodla, B. Singh, R. Chellappa, and L. S. Davis. Soft-NMS: Improving object detection with one line of code. In ICCV, 2017. 507 | 508 | [22]**(Cascade R-CNN)** Z. Cai and N. Vasconcelos. 
Cascade R-CNN: Delving into high quality object detection. In CVPR, 2018. 509 | 510 | [23]**(IoUNet)** Borui Jiang, Ruixuan Luo, Jiayuan Mao, Tete Xiao, and Yuning Jiang. Acquisition of Localization Confidence for Accurate Object Detection. In ECCV, 2018. 511 | 512 | [24]**(TridentNet)** Yanghao Li, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang. Scale-Aware Trident Networks for Object Detection. In arXiv, 2019. 513 | 514 | [25]**(ResNet)** K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 515 | 516 | [26]**(DenseNet)** Gao Huang, Zhuang Liu, Laurens van der Maaten. Densely Connected Convolutional Networks. In CVPR, 2017. 517 | -------------------------------------------------------------------------------- /assets/Cascade_R-CNN_Net.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/Cascade_R-CNN_Net.png -------------------------------------------------------------------------------- /assets/Consistent Detection.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/Consistent Detection.png -------------------------------------------------------------------------------- /assets/Consistent Localization.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/Consistent Localization.png -------------------------------------------------------------------------------- /assets/RefineDet_Net.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/RefineDet_Net.png
-------------------------------------------------------------------------------- /assets/RefineDet_TCB.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/RefineDet_TCB.png -------------------------------------------------------------------------------- /assets/consistent_optimization_SOTA.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_SOTA.png -------------------------------------------------------------------------------- /assets/consistent_optimization_cls_reg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_cls_reg.png -------------------------------------------------------------------------------- /assets/consistent_optimization_deep_scale.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_deep_scale.png -------------------------------------------------------------------------------- /assets/consistent_optimization_exp1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_exp1.png -------------------------------------------------------------------------------- /assets/consistent_optimization_in_out_iou.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_in_out_iou.png -------------------------------------------------------------------------------- /assets/consistent_optimization_misalignment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_misalignment.png -------------------------------------------------------------------------------- /assets/consistent_optimization_net.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_net.png -------------------------------------------------------------------------------- /assets/consistent_optimization_ssd.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/consistent_optimization_ssd.png -------------------------------------------------------------------------------- /assets/detnet_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/detnet_1.png -------------------------------------------------------------------------------- /assets/detnet_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/detnet_2.png -------------------------------------------------------------------------------- /assets/refinedet_loss.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/refinedet_loss.png -------------------------------------------------------------------------------- /assets/屏幕快照 2019-06-20 下午11.27.42.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/assets/屏幕快照 2019-06-20 下午11.27.42.png -------------------------------------------------------------------------------- /backbone/deepen/resnet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/deepen/resnet.pdf -------------------------------------------------------------------------------- /backbone/feature/DenseNet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/feature/DenseNet.pdf -------------------------------------------------------------------------------- /backbone/mobile/MobileNets.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/mobile/MobileNets.pdf -------------------------------------------------------------------------------- /backbone/widen/GoogLeNet_InceptionV1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/widen/GoogLeNet_InceptionV1.pdf -------------------------------------------------------------------------------- /backbone/widen/InceptionV2&V3.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/widen/InceptionV2&V3.pdf -------------------------------------------------------------------------------- /backbone/widen/InceptionV4&Inception-ResNet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/widen/InceptionV4&Inception-ResNet.pdf -------------------------------------------------------------------------------- /backbone/widen/Xception_CVPR_2017.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/backbone/widen/Xception_CVPR_2017.pdf -------------------------------------------------------------------------------- /detection/Survey_Generic Object Detection.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/Survey_Generic Object Detection.pdf -------------------------------------------------------------------------------- /detection/one_stage/1-YOLOv1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/1-YOLOv1.pdf -------------------------------------------------------------------------------- /detection/one_stage/10-M2Det.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/10-M2Det.pdf -------------------------------------------------------------------------------- /detection/one_stage/11-Consistent Optimization for 
/detection/one_stage/11-Consistent Optimization for Single-Shot Object Detection.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/11-Consistent Optimization for Single-Shot Object Detection.pdf

/detection/one_stage/2-SSD.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/2-SSD.pdf

/detection/one_stage/3-DSSD.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/3-DSSD.pdf

/detection/one_stage/4-YOLOv2.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/4-YOLOv2.pdf

/detection/one_stage/5-Focal_arxiv.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/5-Focal_arxiv.pdf

/detection/one_stage/5_Focal_Loss_ICCV17.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/5_Focal_Loss_ICCV17.pdf

/detection/one_stage/6-DSOD.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/6-DSOD.pdf

/detection/one_stage/7-YOLOv3.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/7-YOLOv3.pdf

/detection/one_stage/8-RefineDet.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/8-RefineDet.pdf

/detection/one_stage/8-RefineDet_sup.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/8-RefineDet_sup.pdf

/detection/one_stage/9-RFBNet.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/one_stage/9-RFBNet.pdf

/detection/two_stage/1-RCNN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/1-RCNN.pdf

/detection/two_stage/10-soft-NMS .pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/10-soft-NMS .pdf

/detection/two_stage/11-Cascade_R-CNN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/11-Cascade_R-CNN.pdf

/detection/two_stage/12-IoUNet.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/12-IoUNet.pdf

/detection/two_stage/13-TridentNet.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/13-TridentNet.pdf

/detection/two_stage/2-SPPNet.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/2-SPPNet.pdf

/detection/two_stage/3-Fast_R-CNN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/3-Fast_R-CNN.pdf

/detection/two_stage/4-Faster R-CNN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/4-Faster R-CNN.pdf

/detection/two_stage/5-OHEM.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/5-OHEM.pdf

/detection/two_stage/6-R-FCN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/6-R-FCN.pdf

/detection/two_stage/7-FPN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/7-FPN.pdf

/detection/two_stage/8-dcn.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/8-dcn.pdf

/detection/two_stage/9-Mask_R-CNN.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/9-Mask_R-CNN.pdf

/detection/two_stage/9-mask r-cnn_arxiv.pdf:
https://raw.githubusercontent.com/espectre/Object_Detection/d0ee3237dc2df68f9e6c97a62c03e94f78051dbf/detection/two_stage/9-mask r-cnn_arxiv.pdf
--------------------------------------------------------------------------------