├── LICENSE
├── MLT_PwA.gif
├── README.md
├── ai-and-cognitive-science
│   ├── DL needs a prefrontal cortex.pdf
│   ├── README.md
│   └── images
│       ├── PFC.png
│       └── dl_needs_a_pfc_title.png
├── convolutional-neural-networks
│   ├── Estimating-Example-Difficulty-using-VOG.pdf
│   ├── README.md
│   ├── Selective-Kernel-Networks.pdf
│   ├── Squeeze-and-Excitation_Networks.pdf
│   └── images
│       ├── sknets.png
│       ├── squeeze-and-excitation-networks.png
│       └── vog.png
├── miscellaneous
│   ├── Dataset-Augmentation-in-Feature-Space.pdf
│   ├── README.md
│   └── images
│       └── Dataset-Augmentation-in-Feature-Space.png
├── object-detection
│   ├── README.md
│   ├── RetinaNet.pdf
│   ├── YOLOv1.pdf
│   ├── images
│   │   ├── RetinaNet-architecture.png
│   │   ├── YOLOv1.png
│   │   └── m2det-architecture.png
│   └── m2det.pdf
└── unsupervised-and-semi-supervised-learning
    ├── Convolutional-clustering-for-unsupervised-learning.pdf
    ├── README.md
    └── images
        └── convolutional-k-means-clustering.png

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2020 Machine Learning Tokyo

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/MLT_PwA.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/MLT_PwA.gif

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Papers with Annotations (PwA)

(created by Alisher Abdulkhaev. [Twitter](https://twitter.com/alisher_ai) | [LinkedIn](https://www.linkedin.com/in/alisher-abdulkhaev/) | [GitHub](https://github.com/alisher0717) | alisher@mltokyo.ai)

This project compiles AI-related papers with illustrations, annotations, and brief explanations of technical keywords, terms, and previous studies, which makes the papers easier to read and their main ideas easier to grasp.

- [object detection PwA](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/tree/master/object-detection)
- [ai and cognitive science related PwA](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/tree/master/ai-and-cognitive-science)
- [unsupervised and semi/self-supervised learning related PwA](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/unsupervised-and-semi-supervised-learning)
- [convolutional neural networks related PwA](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/README.md)
- [miscellaneous PwA](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/miscellaneous/README.md)

Please feel free to open a new PR (pull request) if you have annotated research papers of this kind (related to AI, ML, DL, neuroscience, or cognitive science).

Also, please give us feedback by opening an issue on this repository. We look forward to your collaboration!

---

## Annotation tools: how the annotations are generated

📌 **Software:** Notability app — import the PDF into the app, make annotations (handwritten notes, imported figures, stickers, etc.), and export the annotated PDF.

📌 **Hardware:** iPad (6th generation) with Apple Pencil (1st generation). However, any iPad that supports the Apple Pencil, or any Android tablet, should be fine.
36 | 37 | Featured at: 38 | 39 | 40 | 📌 [MLT's Blog](https://machinelearningtokyo.com/2020/06/25/papers-with-annotations/) 41 | 42 | 📌 [David Ha's tweet](https://twitter.com/hardmaru/status/1275690178699542529?s=20) 43 | 44 | 📌 [www.analyticsvidhya.com](https://www.analyticsvidhya.com/blog/2020/07/7-open-source-data-science-projects-add-resume/?unapproved=162210&moderation-hash=6e766ca8354bb4f681ca290eb6a65647#comment-162210) 45 |
46 | 47 |
48 | 49 | Contributors: 50 | 51 | 52 | - Alisher Abdulkhaev 53 | - Jayson Cunanan 54 |
55 | 56 | 57 | -------------------------------------------------------------------------------- /ai-and-cognitive-science/DL needs a prefrontal cortex.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/ai-and-cognitive-science/DL needs a prefrontal cortex.pdf -------------------------------------------------------------------------------- /ai-and-cognitive-science/README.md: -------------------------------------------------------------------------------- 1 | # AI and cognitive science related papers with annotations 2 | 3 | --- 4 | 5 | 6 | ## Deep Learning needs a prefrontal cortex: annotation (ICLR Workshop 2020) 7 | 8 | [


📌 [Original paper](https://baicsworkshop.github.io/pdf/BAICS_10.pdf) | **Authors: Jacob Russin, Randall C. O’Reilly, Yoshua Bengio**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/ai-and-cognitive-science/DL%20needs%20a%20prefrontal%20cortex.pdf)

---

--------------------------------------------------------------------------------
/ai-and-cognitive-science/images/PFC.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/ai-and-cognitive-science/images/PFC.png

--------------------------------------------------------------------------------
/ai-and-cognitive-science/images/dl_needs_a_pfc_title.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/ai-and-cognitive-science/images/dl_needs_a_pfc_title.png

--------------------------------------------------------------------------------
/convolutional-neural-networks/Estimating-Example-Difficulty-using-VOG.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/convolutional-neural-networks/Estimating-Example-Difficulty-using-VOG.pdf

--------------------------------------------------------------------------------
/convolutional-neural-networks/README.md:
--------------------------------------------------------------------------------
# Squeeze-and-Excitation Networks (CVPR 2018)

[![Squeeze-and-Excitation Networks](images/squeeze-and-excitation-networks.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/Squeeze-and-Excitation_Networks.pdf)

📌 [Original paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.pdf) | **Authors: Jie Hu, Li Shen and Gang Sun**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/Squeeze-and-Excitation_Networks.pdf)

---

# Estimating Example Difficulty using Variance of Gradients (ICML 2020, WHI Workshop)

[![Estimating Example Difficulty using Variance of Gradients](images/vog.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/Estimating-Example-Difficulty-using-VOG.pdf)

📌 [Original paper](https://arxiv.org/pdf/2008.11600.pdf) | **Authors: Chirag Agarwal and Sara Hooker**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/Estimating-Example-Difficulty-using-VOG.pdf)

📌 **Related papers:** [Continual Deep Learning by Functional Regularisation of Memorable Past](https://arxiv.org/pdf/2004.14070.pdf) | [Nagging Predictors](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3627163)

---

# Selective Kernel Networks (CVPR 2019)

[![Selective Kernel Networks](images/sknets.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/Selective-Kernel-Networks.pdf)

📌 [Original paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Selective_Kernel_Networks_CVPR_2019_paper.pdf) | **Authors: Xiang Li, Wenhai Wang, Xiaolin Hu and Jian Yang**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/convolutional-neural-networks/Selective-Kernel-Networks.pdf)

📌 **Implementations:** [Caffe (official)](https://github.com/implus/SKNet) | [PyTorch](https://github.com/pppLang/SKNet)

--------------------------------------------------------------------------------
/convolutional-neural-networks/Selective-Kernel-Networks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/convolutional-neural-networks/Selective-Kernel-Networks.pdf

--------------------------------------------------------------------------------
/convolutional-neural-networks/Squeeze-and-Excitation_Networks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/convolutional-neural-networks/Squeeze-and-Excitation_Networks.pdf

--------------------------------------------------------------------------------
/convolutional-neural-networks/images/sknets.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/convolutional-neural-networks/images/sknets.png

--------------------------------------------------------------------------------
/convolutional-neural-networks/images/squeeze-and-excitation-networks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/convolutional-neural-networks/images/squeeze-and-excitation-networks.png

--------------------------------------------------------------------------------
/convolutional-neural-networks/images/vog.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/convolutional-neural-networks/images/vog.png

--------------------------------------------------------------------------------
/miscellaneous/Dataset-Augmentation-in-Feature-Space.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/miscellaneous/Dataset-Augmentation-in-Feature-Space.pdf

--------------------------------------------------------------------------------
/miscellaneous/README.md:
--------------------------------------------------------------------------------
# Miscellaneous topics

---

## Dataset Augmentation in Feature Space

[![Dataset Augmentation in Feature Space](images/Dataset-Augmentation-in-Feature-Space.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/miscellaneous/Dataset-Augmentation-in-Feature-Space.pdf)

📌 [Original paper](https://openreview.net/pdf?id=HJ9rLLcxg) | **Authors: Terrance DeVries, Graham W. Taylor**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/miscellaneous/Dataset-Augmentation-in-Feature-Space.pdf)

--------------------------------------------------------------------------------
/miscellaneous/images/Dataset-Augmentation-in-Feature-Space.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/miscellaneous/images/Dataset-Augmentation-in-Feature-Space.png

--------------------------------------------------------------------------------
/object-detection/README.md:
--------------------------------------------------------------------------------
## M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network

[![M2Det architecture](images/m2det-architecture.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/object-detection/m2det.pdf)

📌 [Original paper](https://arxiv.org/pdf/1811.04533.pdf) | **Authors: Qijie Zhao, Tao Sheng, Yongtao Wang, Zhi Tang, Ying Chen, Ling Cai and Haibin Ling**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/object-detection/m2det.pdf)

📌 **Implementations:** [M2Det PyTorch](https://github.com/qijiezhao/M2Det)

---

## RetinaNet: Focal Loss for Dense Object Detection (ICCV 2017)

[![RetinaNet architecture](images/RetinaNet-architecture.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/object-detection/RetinaNet.pdf)

📌 [Original paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Lin_Focal_Loss_for_ICCV_2017_paper.pdf) | **Authors: Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollár**

📌 [Paper with annotation](https://github.com/alisher0717/machine-learning-notes/blob/master/object-detection-papers/RetinaNet.pdf)

📌 **Implementations:** [Detectron](https://github.com/facebookresearch/Detectron) | [Keras-retinanet](https://github.com/fizyr/keras-retinanet)

---

## YOLOv1: You Only Look Once: Unified, Real-Time Object Detection

[![YOLOv1](images/YOLOv1.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/object-detection/YOLOv1.pdf)

📌 [Original paper](https://arxiv.org/abs/1506.02640) | **Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi** | [Darknet](http://pjreddie.com/yolo/)

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/object-detection/YOLOv1.pdf)

--------------------------------------------------------------------------------
/object-detection/RetinaNet.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/object-detection/RetinaNet.pdf

--------------------------------------------------------------------------------
/object-detection/YOLOv1.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/object-detection/YOLOv1.pdf

--------------------------------------------------------------------------------
/object-detection/images/RetinaNet-architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/object-detection/images/RetinaNet-architecture.png

--------------------------------------------------------------------------------
/object-detection/images/YOLOv1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/object-detection/images/YOLOv1.png

--------------------------------------------------------------------------------
/object-detection/images/m2det-architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/object-detection/images/m2det-architecture.png

--------------------------------------------------------------------------------
/object-detection/m2det.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/object-detection/m2det.pdf

--------------------------------------------------------------------------------
/unsupervised-and-semi-supervised-learning/Convolutional-clustering-for-unsupervised-learning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/unsupervised-and-semi-supervised-learning/Convolutional-clustering-for-unsupervised-learning.pdf

--------------------------------------------------------------------------------
/unsupervised-and-semi-supervised-learning/README.md:
--------------------------------------------------------------------------------
# Unsupervised, semi-supervised and self-supervised training

---

## Convolutional Clustering for Unsupervised Learning @ ICML 2016

[![Convolutional k-means clustering](images/convolutional-k-means-clustering.png)](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/unsupervised-and-semi-supervised-learning/Convolutional-clustering-for-unsupervised-learning.pdf)

📌 [Original paper](https://arxiv.org/abs/1511.06241) | **Authors: Aysegul Dundar, Jonghoon Jin, Eugenio Culurciello**

📌 [Paper with annotation](https://github.com/Machine-Learning-Tokyo/papers-with-annotations/blob/master/unsupervised-and-semi-supervised-learning/Convolutional-clustering-for-unsupervised-learning.pdf)

---

--------------------------------------------------------------------------------
/unsupervised-and-semi-supervised-learning/images/convolutional-k-means-clustering.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Machine-Learning-Tokyo/papers-with-annotations/d0b9f6731edeb922a0303468a9d9d74efcb6e71e/unsupervised-and-semi-supervised-learning/images/convolutional-k-means-clustering.png
--------------------------------------------------------------------------------