├── _config.yml ├── .github ├── workflows │ ├── auto_request_review.yml │ └── comment.yml └── reviewers.yml ├── localization └── README.md ├── natural_language_processing └── README.md ├── math └── README.md ├── README.md ├── literature_study_tips └── Readme.md ├── dynamics_controls └── README.md ├── CONTRIBUTING.md ├── reinforcement_learning └── README.md ├── LICENSE.md └── deep_learning └── README.md /_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-slate 2 | -------------------------------------------------------------------------------- /.github/workflows/auto_request_review.yml: -------------------------------------------------------------------------------- 1 | name: Auto Request Review 2 | 3 | on: 4 | pull_request: 5 | types: [opened, reopened] 6 | 7 | jobs: 8 | auto-request-review: 9 | name: Auto Request Review 10 | runs-on: ubuntu-latest 11 | permissions: 12 | pull-requests: write 13 | steps: 14 | - name: Request review based on file changes and/or groups the author belongs to 15 | uses: necojackarc/auto-request-review@v0.7.0 16 | with: 17 | token: ${{ secrets.GITHUB_TOKEN }} 18 | config: .github/reviewers.yml # Config file location override -------------------------------------------------------------------------------- /.github/workflows/comment.yml: -------------------------------------------------------------------------------- 1 | on: 2 | issues: 3 | types: [opened] 4 | pull_request_target: 5 | types: [opened] 6 | pull_request: 7 | types: [opened] 8 | 9 | jobs: 10 | welcome: 11 | runs-on: ubuntu-latest 12 | steps: 13 | - uses: EddieHubCommunity/gh-action-community/src/welcome@main 14 | with: 15 | github-token: ${{ secrets.GITHUB_TOKEN }} 16 | issue-message: "Hey! Thanks for creating this issue. Please wait while the people of IvLabs review your issue. In case there is no response for one week, please add a comment in the same issue." 17 | pr-message: "Hey! Thanks for your contribution. Please wait while the people of IvLabs review your PR. In case there is no response for one week, please add a comment in the same PR." 18 | footer: "Keep Learning! :hatching_chick:" 19 | -------------------------------------------------------------------------------- /.github/reviewers.yml: -------------------------------------------------------------------------------- 1 | # https://github.com/marketplace/actions/auto-request-review 2 | reviewers: 3 | # The default reviewers 4 | defaults: 5 | - default-reviewers 6 | 7 | groups: 8 | default-reviewers: 9 | - team:repo-maintainer 10 | controls: 11 | - aditya-shirwatkar 12 | - Kush0301 13 | - Nachiket497 14 | - RiVer2000 15 | nlp: 16 | - rishika2110 17 | - GlazeDonuts 18 | - aayush-fadia 19 | - aneesh-shetye 20 | - Diksha942 21 | - Kshitij-Ambilduke 22 | localization: 23 | - Kush0301 24 | - prakrutk 25 | rl: 26 | - RajGhugare19 27 | - M-NEXT 28 | dl: 29 | # - GlazeDonuts 30 | # - rishika2110 31 | # - aayush-fadia 32 | # - take2rohit 33 | - Kshitij-Ambilduke 34 | - sibam23 35 | - vignesh-creator 36 | math: 37 | - GlazeDonuts 38 | 39 | 40 | 41 | files: 42 | # Keys are glob expressions. 43 | # You can assign groups defined above as well as GitHub usernames.
44 | '**': 45 | - default-reviewers # group 46 | 'dynamics_controls/**': 47 | - controls # group 48 | 'deep_learning/**': 49 | - dl # group 50 | 'natural_language_processing/**': 51 | - nlp # group 52 | 'reinforcement_learning/**': 53 | - rl 54 | 'localization/**': 55 | - localization 56 | 'math/**': 57 | - math 58 | '.github/**': 59 | - ABD-01 # username 60 | 61 | options: 62 | ignore_draft: true 63 | ignored_keywords: 64 | - DO NOT REVIEW 65 | enable_group_assignment: false 66 | number_of_reviewers: 5 -------------------------------------------------------------------------------- /localization/README.md: -------------------------------------------------------------------------------- 1 | ## SLAM 2 | 3 | | Paper | Notes | Author | Summary | 4 | |:--------------------------------------------------------------------------------------------------:|:------------------------------------------------------:|:-----------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:| 5 | | [Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age](https://ieeexplore.ieee.org/document/7747236) | [HackMD](https://hackmd.io/@AniketGujarathi/HyXSId3yv) | [Aniket Gujarathi](https://www.linkedin.com/in/aniket-gujarathi/) | This paper explains in detail the various past approaches to SLAM and the open problems in the field that future researchers can tackle. | 6 | | [SegMap: Segment-based mapping and localization using data-driven descriptors](https://arxiv.org/pdf/1909.12837.pdf) | [HackMD](https://hackmd.io/@AniketGujarathi/BkmdjaWyw) | [Aniket Gujarathi](https://www.linkedin.com/in/aniket-gujarathi/) | This paper explains a solution for localization and mapping based on the extraction of segments in 3D point clouds. | 7 | -------------------------------------------------------------------------------- /natural_language_processing/README.md: -------------------------------------------------------------------------------- 1 | ## Natural Language Processing 2 | 3 | | Paper | Notes | Author(s) | Summary | 4 | |:---------------------------------------------------------------------------------------------------:|:---------------------------------------------------:|:------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------:| 5 | | [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) | [HackMD](https://hackmd.io/@photon-dodo/HyrN0wjkv) | [Rishika](https://github.com/rishika2110), [Khurshed](https://github.com/GlazeDonuts) | This paper presents a novel architecture and paradigm for Neural Machine Translation. | 6 | | [Neural Machine Translation by jointly learning to align and translate](https://arxiv.org/abs/1409.0473) | [HackMD](https://hackmd.io/@photon-dodo/HJfefAQbP) | [Rishika](https://github.com/rishika2110), [Khurshed](https://github.com/GlazeDonuts) | This paper introduces a novel attention-based approach for translation. | 7 | | [Answer Them All!
Toward Universal Visual Question Answering Models](https://arxiv.org/abs/1903.00366) | [Notion](https://phrygian-macaroni-e3b.notion.site/Answer-Them-All-RAMEN-d441cd8797984474baeba5ce4176956d) | [Aneesh](https://sites.google.com/view/aneesh-shetye/home) | This paper tries to resolve the disparity in performance of previous Visual Question Answering (VQA) architectures on synthetic and natural datasets by introducing a novel architecture. | 8 | -------------------------------------------------------------------------------- /math/README.md: -------------------------------------------------------------------------------- 1 | ## Math Concepts 2 | 3 | | Paper | Notes | Author | Summary | 4 | |:--------------------------------------------:|:------------------------------------------------------------------:|:---------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:| 5 | | Convexity and Convergence in Gradient Descent | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/ByIMT8Zg_) | [Sharath](https://sharathraparthy.github.io/) | These notes discuss strongly convex and smooth functions and their convergence rates under gradient descent | 6 | | Notes on Stability of Dynamical Systems | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/rkq_V2Cyd) | [Sharath](https://sharathraparthy.github.io/) | In these notes, we discuss discrete and continuous dynamical systems and their stability properties | 7 | | Policy Gradient Theorem | [HackMD](https://hackmd.io/@Raj-Ghugare/rygKPUD08) | [Raj](https://github.com/RajGhugare19) | Derivation and explanation of the policy gradient theorem in Reinforcement Learning | 8 | | Reproducing Kernel Hilbert Spaces | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/rkTjKdRMS) | [Sharath](https://sharathraparthy.github.io/) | These notes review some fundamental concepts of linear algebra, like vector spaces and inner product spaces, and then introduce the basic concepts of RKHS. | 9 | | Guaranteed computation of robot trajectories | [HackMD](https://hackmd.io/@kZ5m8OgNSouLVUfdO4Vu3w/r1CrveDuI/edit) | [Uddesh](https://github.com/uddeshtople) | A contractor-based approach is proposed for guaranteed integration of state equations. The framework is based on the use of tubes as envelopes of feasible trajectories. | 10 | 11 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Research Paper Notes 2 | [![Website](https://img.shields.io/website?down_message=offline&up_message=online&url=https%3A%2F%2Fivlabs.github.io%2FResearchPaperNotes%2F)](https://ivlabs.github.io/ResearchPaperNotes/) [![GitHub stars](https://img.shields.io/github/stars/IvLabs/ResearchPaperNotes?style=social)](https://github.com/IvLabs/ResearchPaperNotes/stargazers) 3 | 4 | An initiative to read research papers at [IvLabs](http://www.ivlabs.in/). For an interactive reading experience, visit this repo's [GitHub Page](https://ivlabs.github.io/ResearchPaperNotes/). If you like the repo, please star it; this motivates us to update the repo frequently. 5 | 6 | **Some tips for doing a literature study can be found [here](literature_study_tips)** 7 | 8 | ## List of Topics 9 | To read research paper notes made by IvLabs members, please click on the following topic links.
10 | * [Deep Learning](deep_learning) 11 | * [Dynamics and Controls](dynamics_controls) 12 | * [Localization](localization) 13 | * [Math Concepts](math) 14 | * [Natural Language Processing](natural_language_processing) 15 | * [Reinforcement Learning](reinforcement_learning) 16 | 17 | ## Rules for appending papers: 18 | 19 | *Read [CONTRIBUTING.md](CONTRIBUTING.md) before adding any research paper notes* 20 | - You should thoroughly read the research paper and make proper notes using [HackMD](https://hackmd.io/), or annotate the PDF of the research paper and upload it to Google Drive. 21 | - **New paper notes should be appended at the top of the table.** 22 | - **While editing any Markdown table, make sure to switch off word wrapping. In case you are using the GitHub GUI, use the `No Wrap` setting found in the top right corner of the text editor.** 23 | - The notes must mention those members who were part of the reading group. 24 | - If the notes and paper reading are completed, then update the details in the appropriate table below. 25 | - You can then create a pull request to merge the changes. For help on creating pull requests, [refer to this page](https://github.com/IvLabs/resources/tree/master/software). 26 | - If anyone feels that there are changes required in the HackMD or PDF notes, please add comments in the notes themselves 27 | * [How to add comments for HackMD Notes](https://hackmd.io/s/how-to-use-comments) 28 | * [How to add comments to Google Drive PDFs](https://gsuiteupdates.googleblog.com/2018/02/comment-on-files-in-drive-preview-mode.html) 29 | - These comments will be reviewed and proper actions will be taken by the authors. 30 | -------------------------------------------------------------------------------- /literature_study_tips/Readme.md: -------------------------------------------------------------------------------- 1 | # Literature Study Tips 2 | 3 | *An alternate resource for reading papers can also be found [here](https://drive.google.com/file/d/1d6zA4rUGwZwtD9CNhs07Kzj1yQjpIwIE/view?usp=sharing)* 4 | 5 | # Table of Contents 6 | 7 | * [Reading Research Papers](#reading-research-papers) 8 | * [Steps to follow](#steps-to-follow) 9 | * [Organizing the papers](#organizing-the-papers) 10 | * [Guide for a single paper](#guide-for-a-single-paper) 11 | * [Questions to keep in mind](#questions-to-keep-in-mind) 12 | * [Some Sources of Papers](#some-sources-of-papers) 13 | * [General Tips](#general-tips) 14 | * [Math](#math) 15 | * [Code](#code) 16 | * [Long term advice](#long-term-advice) 17 | 18 | # Reading Research Papers 19 | 20 | These are notes from the amazing lecture by Andrew Ng; the full video can be viewed [here](https://www.youtube.com/watch?v=733m6qBH-jI) 21 | 22 | ## Steps to follow 23 | 24 | ### Organizing the papers 25 | 26 | - Compile a list of resources 27 | - Papers from arXiv and conferences 28 | - Journals 29 | - Medium/GitHub posts 30 | - Articles/Blogs 31 | - Skim through the list of resources 32 | - Make a table where each row is a paper and a column records how much of it you understood from skimming (10-100%) 33 | - Now read the one with the lowest value of the metric and try to understand it; if you can't, go to the references and read those till you get a basic idea of the paper 34 | - Keep doing this till you have a basic knowledge of the papers 35 | - Then select the papers you feel are worth reading completely 36 | - Reading around 5-20 papers, you'll have a basic idea of the field and enough background to implement some of the works 37 | - Reading around
50-100 papers, you'll have a deep enough understanding to do in-depth research (it does not mean you have mastered the field :smile:) 38 | 39 | ### Guide for a single paper 40 | 41 | Take multiple passes through the paper 42 | - First Pass 43 | - Read the Title, Abstract, and Figures (the figures alone can sometimes summarize the entire paper) 44 | - Second Pass 45 | - Read the Introduction, Conclusion, and Figures more carefully, then skim through the rest (skip related work if you're not familiar with it in the second pass) 46 | - Third Pass 47 | - Read everything, but skip or skim the Math 48 | - Fourth Pass 49 | - Read the whole thing, but skip parts that don't make sense 50 | 51 | ### Questions to keep in mind 52 | 53 | Ask yourself these questions while reading the paper 54 | - What are the authors trying to accomplish in this work? 55 | - What were the key elements of the approach in this work? 56 | - What can you use yourself? 57 | - What other references do you want to follow? 58 | 59 | ## Some Sources of Papers 60 | 61 | * Top Tier Conferences like NeurIPS, ICLR, CVPR, ICRA, RSS, IROS (more on this [here](https://github.com/IvLabs/resources/tree/master/conferences)) 62 | * Twitter 63 | * Subreddits 64 | * Paper Reading Groups, Communities and Friends 65 | 66 | ## General Tips 67 | 68 | ### Math 69 | 70 | To understand the math behind the paper 71 | - Make a few passes and take detailed notes 72 | - Try to rederive the math from scratch on a blank sheet of paper 73 | - If you can do this, then you can learn to derive your own novel algorithms 74 | - E.g., people from the art community sit in art museums and copy the work of the masters 75 | 76 | ### Code 77 | 78 | Download and run the open-source code and try to reimplement it from scratch 79 | 80 | ## Long term advice 81 | 82 | * Keep reading papers consistently 83 | * You won't gain expertise in one day, and you won't get a lot of knowledge from reading one paper a weekend. But if you keep doing this for a year, you'll reach somewhere 84 | * One great project is better than many lame projects 85 | * Focus on the team (people you interact with) 86 | * Maintain a work-life balance 87 | 88 | -------------------------------------------------------------------------------- /dynamics_controls/README.md: -------------------------------------------------------------------------------- 1 | # Dynamics and Controls 2 | 3 | | Papers | Notes | Author | Summary | 4 | |:------:|:-----:|:------:|:-------:| 5 | | [Taut Cable Control of a Tethered UAV](https://folk.ntnu.no/skoge/prost/proceedings/ifac2014/media/files/2581.pdf) | [HackMD](https://hackmd.io/@tethered-aerial-vehicle/SkvWQgfVc) | [Pushkar Dave](https://github.com/lynx1902) | This paper focuses on the design of a stabilizing control law for an aerial vehicle which is physically connected to a ground station by means of a tether cable. | 6 | | [Human-State-Aware Controller for a Tethered Aerial Robot Guiding a Human by Physical Interaction](https://ieeexplore.ieee.org/iel7/7083369/9647862/09684670.pdf) | [Google Drive](https://drive.google.com/file/d/1qifTFWh-TBxmnJQ9WuqUNUoYEsID3mZh/view?usp=sharing) | [Prajyot Jadhav](https://github.com/Arcane-01) | This paper proposes a human-state-aware controller that includes a human's velocity feedback for a tethered aerial robot to guide blind humans.
| 7 | | [Gait and Trajectory Optimization for Legged Systems Through Phase-Based End-Effector Parameterization](https://www.researchgate.net/publication/322887667_Gait_and_Trajectory_Optimization_for_Legged_Systems_Through_Phase-Based_End-Effector_Parameterization) | [Google Drive](https://drive.google.com/file/d/1UlShnRpJuN-L8Ucpn-yDmlN9Xsj4qDB1/view?usp=sharing) | [Aditya Shirwatkar](https://github.com/aditya-shirwatkar) | This paper presents a single trajectory optimization formulation for legged locomotion that automatically determines the gait sequence, step timings, footholds, swing-leg motions, and 6D body motion over nonflat terrain, without any additional modules or any handmade heuristics | 8 | | [Automatic Snake Gait Generation Using Model Predictive Control](https://arxiv.org/abs/1909.11204) | [Google Drive](https://drive.google.com/file/d/1sJN2Q16ls0ROZqS76qcz3wMYFTGayL6L/view?usp=sharing) | [Aditya Shirwatkar](https://github.com/aditya-shirwatkar) | This paper proposes a Model Predictive Control approach that automatically generates effective undulatory locomotion gaits for snake robots via trajectory optimization. Furthermore, the proposed method can also produce more complex or irregular gaits, e.g. for obstacle avoidance or executing sharp turns | 9 | | [Integral Sliding Mode Based Switched Structure Control Scheme for Robot Manipulators](https://www.researchgate.net/publication/327807849_Integral_Sliding_Mode_Based_Switched_Structure_Control_Scheme_for_Robot_Manipulators) | [HackMD](https://hackmd.io/INtsyouET5Sxv6K6pIUcoQ?view) | [Saad](https://github.com/saad2121) | This paper proposes a switching scheme between an inverse-dynamics-based centralized controller and a set of decentralized controllers. ISM is used as a perturbation estimator and to provide robustness against a wide class of uncertainties | 10 | | [Real-Time Obstacle Avoidance for Manipulators and Mobile Robots](https://link.springer.com/chapter/10.1007/978-1-4613-8997-2_29) | [HackMD](https://hackmd.io/m_dwVyo9TnKIZQa5V7QGRQ?view) | [Saad](https://github.com/saad2121) | This paper proposes a real-time obstacle avoidance approach for manipulators and mobile robots using the Artificial Potential Field approach | 11 | | [Customizable Three-Dimensional Printed Origami Soft Robotic Joint With Effective Behavior Shaping for Safe Interactions](https://ieeexplore.ieee.org/abstract/document/8481372/keywords#keywords) | [HackMD](https://hackmd.io/@kZ5m8OgNSouLVUfdO4Vu3w/SJtDCMGtU) | [Uddesh](https://github.com/uddeshtople) | A combination of passive stiffness presetting and active PID cascade control was implemented for pneumatic soft origami rotary actuators (SoRAs) | 12 | | [Hybrid neural network fraction integral terminal sliding mode control of an Inchworm robot manipulator](https://www.sciencedirect.com/science/article/abs/pii/S0888327016300449) | [HackMD](https://hackmd.io/@kZ5m8OgNSouLVUfdO4Vu3w/B1Zd2z_58) | [Uddesh](https://github.com/uddeshtople) | This paper proposes an adaptive neural network method to estimate unknown disturbances. The chattering phenomenon was reduced by using a neural network | 13 | 14 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## Rules for appending papers: 2 | 3 | 1. You should thoroughly read the research paper and make proper notes using [HackMD](https://hackmd.io/). 4 | 2.
HackMD notes must mention those members who were part of the reading group. 5 | 3. If the notes and paper reading are completed, then update the details in the appropriate table below. 6 | 4. You can then create a pull request to merge the changes. For help on creating pull requests, [refer to this page](https://github.com/IvLabs/resources/tree/master/software). 7 | 5. If anyone feels that there are changes required in the HackMD notes, please add comments in the notes themselves ([How to add comments](https://hackmd.io/s/how-to-use-comments)). 8 | 6. These comments will be reviewed and proper actions will be taken by the authors. 9 | 10 | ____ 11 | 12 | ## Steps to put up an issue: 13 | 14 | 1. Click on the [Issue button](https://github.com/IvLabs/ResearchPaperNotes/issues) at the top of the page. 15 | 2. Click `New issue` and fill in the details! 16 | 17 | **Note: Issues must clearly mention what they are addressing.** 18 | 19 | ____ 20 | 21 | ## Steps to create a pull request: 22 | 23 | In order to contribute, you have to create a Pull Request from your forked repository, which is a remote clone of this upstream repository. 24 | 25 | 1. Click the `Fork` button at the top right-hand corner of the screen to fork this repository, and don't forget to star the repository! 26 | 27 | 2. Now head over to the forked repository and copy the clone HTTPS URL. 28 | 29 | 3. Next up, clone the forked repository onto the local machine using: `git clone <forked-repo-URL>` 30 | 31 | 4. It is critical to keep your [forked repository in sync with the upstream](https://www.freecodecamp.org/news/how-to-sync-your-fork-with-the-original-git-repository/) repository so merge conflicts can be avoided: 32 | 33 | ```sh 34 | git remote add upstream https://github.com/IvLabs/ResearchPaperNotes.git 35 | git fetch upstream 36 | git pull upstream master 37 | git push 38 | ``` 39 | 40 | 5. Create a separate branch to work on; the branch name must be according to the issue: `git checkout -b <branch-name>` 41 | 42 | 6. Contributors must follow these guidelines: 43 | 44 | 1. You are encouraged to add paper notes on various topics related to AI and Robotics. 45 | 2. All of these should be segregated by sub-topic. 46 | 3. Refer to existing sections before contributing a new one. 47 | 4. Follow the Fork-Commit-Pull Request cycle for contributing, more on this [here](http://github.com/ivlabs/resources/tree/master/software/github#open-source-contributions-with-git). 48 | 5. If you create a new topic folder, make sure to **link that folder in the landing page `README.md`** 49 | 6. The **name of the folder should be consistent with the exact format `word1-word2`**. Some NOT allowed forms are `word1 word2`, `word1word2`, `Word1-word2`, etc. This maintains consistency and proper ordering of folders. 50 | 7. The topic names in [List of Various Fields](https://github.com/comrade-om/ResearchPaperNotes/tree/contributing-guide#list-of-topics) should be in increasing alphabetical order. 51 | 52 | 7. After the contribution work is ready, go ahead and add it to the staging area: `git add .` 53 | 54 | 8. Now it is time to commit your changes and sync them to the forked repository: 55 | 56 | ```sh 57 | git commit -m "<commit message>" 58 | git push origin <branch-name> 59 | ``` 60 | 61 | 9. Issue a [pull request](https://www.freecodecamp.org/news/how-to-make-your-first-pull-request-on-github/) from the forked repo to this repo: 62 | 63 | 1. Head over to the `Pull Request` tab in the forked repo and click on `New Pull Request` 64 | 2. Verify the base and head repository names and branch names. 65 | 3. Fill in the title and provide a concise description.
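Putting steps 3-8 together, here is a minimal end-to-end sketch of the command-line part of this workflow. The GitHub username, branch name, and commit message below are placeholders rather than values defined by this repository; replace them with your own before running anything.

```sh
# Clone your fork (replace <your-username> with your GitHub username)
git clone https://github.com/<your-username>/ResearchPaperNotes.git
cd ResearchPaperNotes

# Keep the fork in sync with the upstream repository so merge conflicts are avoided
git remote add upstream https://github.com/IvLabs/ResearchPaperNotes.git
git fetch upstream
git pull upstream master
git push

# Work on a separate branch named according to the issue you are addressing
git checkout -b <branch-name>

# Stage, commit, and push your paper notes to your fork
git add .
git commit -m "<commit message>"
git push origin <branch-name>
```

The pull request itself is then opened from the GitHub web interface as described in step 9 above.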
66 | 67 | 10. Wait for response on the PR. Congratulations you just contributed to open source! 68 | 69 | ____ 70 | 71 | ## Code of Conduct 72 | 73 | ### Our Pledge 74 | 75 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 76 | 77 | ### Our Standards 78 | 79 | Examples of behavior that contributes to creating a positive environment include: 80 | 81 | - Using welcoming and inclusive language 82 | - Being respectful of differing viewpoints and experiences 83 | - Gracefully accepting constructive criticism 84 | - Focusing on what is best for the community 85 | - Showing empathy towards other community members 86 | 87 | Examples of unacceptable behavior by participants include: 88 | 89 | - The use of sexualized language or imagery and unwelcome sexual attention or advances 90 | - Trolling, insulting/derogatory comments, and personal or political attacks 91 | - Public or private harassment 92 | - Publishing others' private information, such as a physical or electronic address, without explicit permission 93 | - Other conduct which could reasonably be considered inappropriate in a professional setting 94 | 95 | ### Our Responsibilities 96 | 97 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 98 | 99 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. 
100 | 101 | ____ 102 | 103 | ## Maintainers 104 | 105 | - [Rohit Lal](http://take2rohit.github.io/) 106 | - [Raj Ghugare](https://www.linkedin.com/in/raj-ghugare-917137169) 107 | - [Aditya Shirwatkar](https://in.linkedin.com/in/aditya-shirwatkar-40a956188) 108 | - [Akshay Kulkarni](https://github.com/akshaykvnit) 109 | 110 | ____ 111 | -------------------------------------------------------------------------------- /reinforcement_learning/README.md: -------------------------------------------------------------------------------- 1 | ## Reinforcement Learning 2 | 3 | | Paper | Notes | Author | Summary | 4 | |:----------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------:|:---------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------:| 5 | | [DREAM TO CONTROL: LEARNING BEHAVIORS BY LATENT IMAGINATION](https://arxiv.org/pdf/1912.01603.pdf) (ICLR '20) | [HackMD](https://hackmd.io/@iGBkTz2JQ2eBRM83nuhCuA/Hk9dpK0vd) | [Raj](https://github.com/RajGhugare19) | This paper focuses on learning long-horizon behaviors by propagating analytic value gradients through imagined trajectories using a recurrent state-space model (PlaNet, Hafner et al.) | 6 | | [The Value Equivalence Principle for Model-Based Reinforcement Learning](https://arxiv.org/abs/2011.03506) (NeurIPS '20) | [HackMD](https://hackmd.io/@Raj-Ghugare/HkEY6o9MP) | [Raj](https://github.com/RajGhugare19) | This paper introduces and studies the concept of equivalence for Reinforcement Learning models with respect to a set of policies and value functions. It further shows that this principle can be leveraged to find models constrained by representational capacity, which are better than their maximum likelihood counterparts. | 7 | | [Stackelberg Actor-critic: A game theoretic perspective](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/rJFUQA1QO) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/rJFUQA1QO) | [Sharath](https://sharathraparthy.github.io/) | This paper formulates the interaction between the actor and critic as a Stackelberg game and leverages the implicit function theorem to calculate accurate gradient updates for the actor and critic. | 8 | | [Curriculum learning for Reinforcement Learning Domains](https://arxiv.org/abs/2003.04960) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/Sy0IVj8Ju) | [Sharath](https://sharathraparthy.github.io/) | This is a survey paper on curriculum learning methods in reinforcement learning. | 9 | | [Policy Gradient Methods for Reinforcement Learning with Function Approximation](https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf) (NIPS 1999) | [HackMD](https://hackmd.io/@Raj-Ghugare/BJGFOdmCL) | [Raj](https://github.com/RajGhugare19) | This paper provides the first policy gradient algorithm based on neural networks.
| 10 | | [Reinforcement Learning via Fenchel Rockafellar Duality](https://arxiv.org/abs/2001.01866) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/rkZ5s2Y1P) | [Sharath](https://sharathraparthy.github.io/) | This paper reviews the basic concepts of Fenchel duality and f-divergences, and shows how this set of tools can be applied in the context of reinforcement learning to derive theoretically as well as practically robust algorithms. | 11 | | [High-Dimensional Continuous Control Using Generalized Advantage Estimation](https://arxiv.org/abs/1506.02438) | [HackMD](https://hackmd.io/3azkwbmgRLSrqyvUHf5SqQ?view) | [Raj](https://github.com/RajGhugare19) | This paper gives an algorithm combining an advantage estimator with the TRPO technique to empirically guarantee monotonic policy improvement. | 12 | | [Off-Policy Actor-Critic](https://arxiv.org/abs/1205.4839) (ICML '12) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/BkcB-xwvI) | [Sharath](https://sharathraparthy.github.io/) | This paper presents the first off-policy version of the actor-critic algorithm and derives a simple and elegant algorithm which performs better than the existing algorithms on standard reinforcement-learning benchmark problems. | 13 | | [Combining Physical Simulators and Object-Based Networks for Control](https://arxiv.org/pdf/1904.06580.pdf) (ICRA '19) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/Sy6GPG9MB) | [Sharath](https://sharathraparthy.github.io/) | In this paper, the authors propose a hybrid dynamics model, Simulation-Augmented Interaction Networks, where they incorporate Interaction Networks into a physics engine for solving real-world complex robotics control tasks. | 14 | | [Learning Agile and Dynamic Motor Skills for Legged Robots](https://arxiv.org/abs/1901.08652) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/ByzYzEhVS) | [Sharath](https://sharathraparthy.github.io/) | This paper tackles the sim2real transfer problem for legged robots. | 15 | | [PAC Bounds for Multi-armed Bandit](https://link.springer.com/chapter/10.1007/3-540-45435-7_18) (COLT '02) | [HackMD](https://hackmd.io/saK7DdqCRnyBfN3HykLhlA) | [Raj](https://github.com/RajGhugare19) | This paper provides a technique to guarantee PAC bounds based on the reward distribution of the particular problem, achieving better sample complexity. | 16 | |[Deep Reinforcement Learning for Dialogue Generation](https://arxiv.org/abs/1606.01541)|[HackMD](https://hackmd.io/@HnlvODbMQIiAlpHchdZpDQ/Sy4VbzgAt)|[Om](https://github.com/DigZator)| This paper discusses how better dialogue generation can be achieved using RL. It provides a technique to convert conversational properties like informativity, coherence and ease of answering into reward functions.| 17 | |[Rainbow: Combining Improvements in Deep Reinforcement Learning](https://arxiv.org/pdf/1710.02298.pdf)|[HackMD](https://hackmd.io/@HnlvODbMQIiAlpHchdZpDQ/BkYl3IkaK)|[Om](https://github.com/DigZator)| The paper discusses add-ons to DQN and A3C that can improve their performance, namely Double DQN, Prioritized Experience Replay, Dueling Network Architecture, Distributional Q-Learning, and Noisy DQN. | 18 | | [The Option-Critic Architecture](https://arxiv.org/abs/1609.05140) | [HackMD](https://hackmd.io/@HnlvODbMQIiAlpHchdZpDQ/SyI7nv7_q) | [Om](https://github.com/DigZator) | This paper discusses the implementation of a hierarchical reinforcement learning method based on temporal abstractions.
| 19 | | [Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets](https://offline-rl-neurips.github.io/pdf/13.pdf) | [HackMD](https://hackmd.io/@HnlvODbMQIiAlpHchdZpDQ/rkxHo6LL5) | [Om](https://github.com/DigZator) | The paper suggests and provides experimental justification for methods to tackle Distribution Shift. | 20 | | [FeUdal Networks for Hierarchical Reinforcement Learning](https://arxiv.org/abs/1703.01161) | [HackMD](https://hackmd.io/@HnlvODbMQIiAlpHchdZpDQ/HJoIiDw_c) | [Om](https://github.com/DigZator) | This paper describes the FeUdal Network model. Employs a manager-worker hierarchy. | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | Eclipse Public License - v 2.0 2 | 3 | THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE 4 | PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION 5 | OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. 6 | 7 | 1. DEFINITIONS 8 | 9 | "Contribution" means: 10 | 11 | a) in the case of the initial Contributor, the initial content 12 | Distributed under this Agreement, and 13 | 14 | b) in the case of each subsequent Contributor: 15 | i) changes to the Program, and 16 | ii) additions to the Program; 17 | where such changes and/or additions to the Program originate from 18 | and are Distributed by that particular Contributor. A Contribution 19 | "originates" from a Contributor if it was added to the Program by 20 | such Contributor itself or anyone acting on such Contributor's behalf. 21 | Contributions do not include changes or additions to the Program that 22 | are not Modified Works. 23 | 24 | "Contributor" means any person or entity that Distributes the Program. 25 | 26 | "Licensed Patents" mean patent claims licensable by a Contributor which 27 | are necessarily infringed by the use or sale of its Contribution alone 28 | or when combined with the Program. 29 | 30 | "Program" means the Contributions Distributed in accordance with this 31 | Agreement. 32 | 33 | "Recipient" means anyone who receives the Program under this Agreement 34 | or any Secondary License (as applicable), including Contributors. 35 | 36 | "Derivative Works" shall mean any work, whether in Source Code or other 37 | form, that is based on (or derived from) the Program and for which the 38 | editorial revisions, annotations, elaborations, or other modifications 39 | represent, as a whole, an original work of authorship. 40 | 41 | "Modified Works" shall mean any work in Source Code or other form that 42 | results from an addition to, deletion from, or modification of the 43 | contents of the Program, including, for purposes of clarity any new file 44 | in Source Code form that contains any contents of the Program. Modified 45 | Works shall not include works that contain only declarations, 46 | interfaces, types, classes, structures, or files of the Program solely 47 | in each case in order to link to, bind by name, or subclass the Program 48 | or Modified Works thereof. 49 | 50 | "Distribute" means the acts of a) distributing or b) making available 51 | in any manner that enables the transfer of a copy. 52 | 53 | "Source Code" means the form of a Program preferred for making 54 | modifications, including but not limited to software source code, 55 | documentation source, and configuration files. 
56 | 57 | "Secondary License" means either the GNU General Public License, 58 | Version 2.0, or any later versions of that license, including any 59 | exceptions or additional permissions as identified by the initial 60 | Contributor. 61 | 62 | 2. GRANT OF RIGHTS 63 | 64 | a) Subject to the terms of this Agreement, each Contributor hereby 65 | grants Recipient a non-exclusive, worldwide, royalty-free copyright 66 | license to reproduce, prepare Derivative Works of, publicly display, 67 | publicly perform, Distribute and sublicense the Contribution of such 68 | Contributor, if any, and such Derivative Works. 69 | 70 | b) Subject to the terms of this Agreement, each Contributor hereby 71 | grants Recipient a non-exclusive, worldwide, royalty-free patent 72 | license under Licensed Patents to make, use, sell, offer to sell, 73 | import and otherwise transfer the Contribution of such Contributor, 74 | if any, in Source Code or other form. This patent license shall 75 | apply to the combination of the Contribution and the Program if, at 76 | the time the Contribution is added by the Contributor, such addition 77 | of the Contribution causes such combination to be covered by the 78 | Licensed Patents. The patent license shall not apply to any other 79 | combinations which include the Contribution. No hardware per se is 80 | licensed hereunder. 81 | 82 | c) Recipient understands that although each Contributor grants the 83 | licenses to its Contributions set forth herein, no assurances are 84 | provided by any Contributor that the Program does not infringe the 85 | patent or other intellectual property rights of any other entity. 86 | Each Contributor disclaims any liability to Recipient for claims 87 | brought by any other entity based on infringement of intellectual 88 | property rights or otherwise. As a condition to exercising the 89 | rights and licenses granted hereunder, each Recipient hereby 90 | assumes sole responsibility to secure any other intellectual 91 | property rights needed, if any. For example, if a third party 92 | patent license is required to allow Recipient to Distribute the 93 | Program, it is Recipient's responsibility to acquire that license 94 | before distributing the Program. 95 | 96 | d) Each Contributor represents that to its knowledge it has 97 | sufficient copyright rights in its Contribution, if any, to grant 98 | the copyright license set forth in this Agreement. 99 | 100 | e) Notwithstanding the terms of any Secondary License, no 101 | Contributor makes additional grants to any Recipient (other than 102 | those set forth in this Agreement) as a result of such Recipient's 103 | receipt of the Program under the terms of a Secondary License 104 | (if permitted under the terms of Section 3). 105 | 106 | 3. 
REQUIREMENTS 107 | 108 | 3.1 If a Contributor Distributes the Program in any form, then: 109 | 110 | a) the Program must also be made available as Source Code, in 111 | accordance with section 3.2, and the Contributor must accompany 112 | the Program with a statement that the Source Code for the Program 113 | is available under this Agreement, and informs Recipients how to 114 | obtain it in a reasonable manner on or through a medium customarily 115 | used for software exchange; and 116 | 117 | b) the Contributor may Distribute the Program under a license 118 | different than this Agreement, provided that such license: 119 | i) effectively disclaims on behalf of all other Contributors all 120 | warranties and conditions, express and implied, including 121 | warranties or conditions of title and non-infringement, and 122 | implied warranties or conditions of merchantability and fitness 123 | for a particular purpose; 124 | 125 | ii) effectively excludes on behalf of all other Contributors all 126 | liability for damages, including direct, indirect, special, 127 | incidental and consequential damages, such as lost profits; 128 | 129 | iii) does not attempt to limit or alter the recipients' rights 130 | in the Source Code under section 3.2; and 131 | 132 | iv) requires any subsequent distribution of the Program by any 133 | party to be under a license that satisfies the requirements 134 | of this section 3. 135 | 136 | 3.2 When the Program is Distributed as Source Code: 137 | 138 | a) it must be made available under this Agreement, or if the 139 | Program (i) is combined with other material in a separate file or 140 | files made available under a Secondary License, and (ii) the initial 141 | Contributor attached to the Source Code the notice described in 142 | Exhibit A of this Agreement, then the Program may be made available 143 | under the terms of such Secondary Licenses, and 144 | 145 | b) a copy of this Agreement must be included with each copy of 146 | the Program. 147 | 148 | 3.3 Contributors may not remove or alter any copyright, patent, 149 | trademark, attribution notices, disclaimers of warranty, or limitations 150 | of liability ("notices") contained within the Program from any copy of 151 | the Program which they Distribute, provided that Contributors may add 152 | their own appropriate notices. 153 | 154 | 4. COMMERCIAL DISTRIBUTION 155 | 156 | Commercial distributors of software may accept certain responsibilities 157 | with respect to end users, business partners and the like. While this 158 | license is intended to facilitate the commercial use of the Program, 159 | the Contributor who includes the Program in a commercial product 160 | offering should do so in a manner which does not create potential 161 | liability for other Contributors. Therefore, if a Contributor includes 162 | the Program in a commercial product offering, such Contributor 163 | ("Commercial Contributor") hereby agrees to defend and indemnify every 164 | other Contributor ("Indemnified Contributor") against any losses, 165 | damages and costs (collectively "Losses") arising from claims, lawsuits 166 | and other legal actions brought by a third party against the Indemnified 167 | Contributor to the extent caused by the acts or omissions of such 168 | Commercial Contributor in connection with its distribution of the Program 169 | in a commercial product offering. 
The obligations in this section do not 170 | apply to any claims or Losses relating to any actual or alleged 171 | intellectual property infringement. In order to qualify, an Indemnified 172 | Contributor must: a) promptly notify the Commercial Contributor in 173 | writing of such claim, and b) allow the Commercial Contributor to control, 174 | and cooperate with the Commercial Contributor in, the defense and any 175 | related settlement negotiations. The Indemnified Contributor may 176 | participate in any such claim at its own expense. 177 | 178 | For example, a Contributor might include the Program in a commercial 179 | product offering, Product X. That Contributor is then a Commercial 180 | Contributor. If that Commercial Contributor then makes performance 181 | claims, or offers warranties related to Product X, those performance 182 | claims and warranties are such Commercial Contributor's responsibility 183 | alone. Under this section, the Commercial Contributor would have to 184 | defend claims against the other Contributors related to those performance 185 | claims and warranties, and if a court requires any other Contributor to 186 | pay any damages as a result, the Commercial Contributor must pay 187 | those damages. 188 | 189 | 5. NO WARRANTY 190 | 191 | EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT 192 | PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN "AS IS" 193 | BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR 194 | IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF 195 | TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR 196 | PURPOSE. Each Recipient is solely responsible for determining the 197 | appropriateness of using and distributing the Program and assumes all 198 | risks associated with its exercise of rights under this Agreement, 199 | including but not limited to the risks and costs of program errors, 200 | compliance with applicable laws, damage to or loss of data, programs 201 | or equipment, and unavailability or interruption of operations. 202 | 203 | 6. DISCLAIMER OF LIABILITY 204 | 205 | EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT 206 | PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS 207 | SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 208 | EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST 209 | PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 210 | CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 211 | ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE 212 | EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE 213 | POSSIBILITY OF SUCH DAMAGES. 214 | 215 | 7. GENERAL 216 | 217 | If any provision of this Agreement is invalid or unenforceable under 218 | applicable law, it shall not affect the validity or enforceability of 219 | the remainder of the terms of this Agreement, and without further 220 | action by the parties hereto, such provision shall be reformed to the 221 | minimum extent necessary to make such provision valid and enforceable. 
222 | 223 | If Recipient institutes patent litigation against any entity 224 | (including a cross-claim or counterclaim in a lawsuit) alleging that the 225 | Program itself (excluding combinations of the Program with other software 226 | or hardware) infringes such Recipient's patent(s), then such Recipient's 227 | rights granted under Section 2(b) shall terminate as of the date such 228 | litigation is filed. 229 | 230 | All Recipient's rights under this Agreement shall terminate if it 231 | fails to comply with any of the material terms or conditions of this 232 | Agreement and does not cure such failure in a reasonable period of 233 | time after becoming aware of such noncompliance. If all Recipient's 234 | rights under this Agreement terminate, Recipient agrees to cease use 235 | and distribution of the Program as soon as reasonably practicable. 236 | However, Recipient's obligations under this Agreement and any licenses 237 | granted by Recipient relating to the Program shall continue and survive. 238 | 239 | Everyone is permitted to copy and distribute copies of this Agreement, 240 | but in order to avoid inconsistency the Agreement is copyrighted and 241 | may only be modified in the following manner. The Agreement Steward 242 | reserves the right to publish new versions (including revisions) of 243 | this Agreement from time to time. No one other than the Agreement 244 | Steward has the right to modify this Agreement. The Eclipse Foundation 245 | is the initial Agreement Steward. The Eclipse Foundation may assign the 246 | responsibility to serve as the Agreement Steward to a suitable separate 247 | entity. Each new version of the Agreement will be given a distinguishing 248 | version number. The Program (including Contributions) may always be 249 | Distributed subject to the version of the Agreement under which it was 250 | received. In addition, after a new version of the Agreement is published, 251 | Contributor may elect to Distribute the Program (including its 252 | Contributions) under the new version. 253 | 254 | Except as expressly stated in Sections 2(a) and 2(b) above, Recipient 255 | receives no rights or licenses to the intellectual property of any 256 | Contributor under this Agreement, whether expressly, by implication, 257 | estoppel or otherwise. All rights in the Program not expressly granted 258 | under this Agreement are reserved. Nothing in this Agreement is intended 259 | to be enforceable by any entity that is not a Contributor or Recipient. 260 | No third-party beneficiary rights are created under this Agreement. 261 | 262 | Exhibit A - Form of Secondary Licenses Notice 263 | 264 | "This Source Code may also be made available under the following 265 | Secondary Licenses when the conditions for such availability set forth 266 | in the Eclipse Public License, v. 2.0 are satisfied: {name license(s), 267 | version(s), and exceptions or additional permissions here}." 268 | 269 | Simply including a copy of this Agreement, including this Exhibit A 270 | is not sufficient to license the Source Code under Secondary Licenses. 271 | 272 | If it is not possible or desirable to put the notice in a particular 273 | file, then You may include the notice in a location (such as a LICENSE 274 | file in a relevant directory) where a recipient would be likely to 275 | look for such a notice. 276 | 277 | You may add additional accurate notices of copyright ownership. 
278 | -------------------------------------------------------------------------------- /deep_learning/README.md: -------------------------------------------------------------------------------- 1 | ## Deep Learning 2 |
Deep Learning Subtopics: [Domain Adaptation](#domain-adaptation), [Semantic Segmentation](#semantic-segmentation), [Knowledge Distillation](#knowledge-distillation), [Active Learning](#active-learning), [Feature Detection and Description](#feature-detection-and-description)
21 | 22 | **IMPORTANT INSTRUCTION**: While editing this Markdown, make sure to switch off word wrapping. In case you are using the GitHub GUI, use the `No Wrap` setting found in the top right corner of the text editor. 23 | 24 | ### Domain Adaptation 25 | 26 | | Paper | Notes | Author | Summary | 27 | |:---------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|:---------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------:| 28 | | [Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation](https://bit.ly/3wvqAEH) (2021) | [GDrive](https://bit.ly/3wvqAEH) | [Rohit](https://rohitlal.net/) | The authors propose a contrastive learning approach that adapts category-wise centroids across domains. They extend the method with self-training, where they use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels. | 29 | | [FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation](https://arxiv.org/pdf/2011.09230.pdf) (CVPR '21) | [HackMD](https://hackmd.io/@take2rohit/B1I3Wr2wd) | [Rohit](https://rohitlal.net/) | Proposes an unsupervised domain adaptation method that effectively handles large domain discrepancies | 30 | | [Universal Domain Adaptation through Self-Supervision](https://arxiv.org/pdf/2002.07953.pdf) (NeurIPS 2020) | [HackMD](https://hackmd.io/@take2rohit/S1ywXvOD_) | [Rohit](https://rohitlal.net/) | The authors propose a universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) | 31 | | [Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation](http://proceedings.mlr.press/v119/liang20a/liang20a.pdf) (ICML 2020) | [HackMD](https://hackmd.io/@take2rohit/HJI4trNDO) | [Rohit](https://rohitlal.net/) | This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems | 32 | | [Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation](https://arxiv.org/pdf/2007.07077.pdf) (WACV '21) | [HackMD](https://hackmd.io/@take2rohit/SJTomaLUd) | [Rohit](https://rohitlal.net/) | This paper proposes a novel unsupervised MTDA approach to train a CNN that can generalize well across multiple target domains. | 33 | | [Consistency Regularization with High-dimensional Non-adversarial Source-guided Perturbation for Unsupervised Domain Adaptation in Segmentation](https://arxiv.org/abs/2009.08610) (AAAI '21) | [HackMD](https://hackmd.io/@akshayk07/B1rmFchNu) | [Akshay](https://akshayk07.weebly.com/) | They propose a bidirectional style-induced DA method (BiSIDA) that employs consistency regularization to efficiently exploit information from the unlabeled target dataset using a simple neural style transfer model.
| 34 | | [Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation](https://arxiv.org/abs/2101.10979) (CVPR '21) | [HackMD](https://hackmd.io/@akshayk07/r1FE511xd) | [Akshay](https://akshayk07.weebly.com/) | They propose *ProDA* which resorts to prototypes to online denoise the pseudo-labels and learn a compact target feature space. Using knowledge distillation to a self-supervised pretrained model further boosts the performance. | 35 | | [Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation](https://papers.nips.cc/paper/2020/file/7a9a322cbe0d06a98667fdc5160dc6f8-Paper.pdf) (NeurIPS '20) | [HackMD](https://hackmd.io/@akshayk07/Sy2Msh7sw) | [Akshay](https://akshayk07.weebly.com/) | This work investigates open compound domain adaptation (OCDA) for semantic segmentation which deals with mixed and novel situations at the same time. They first cluster the compound target data based on style (discover), then hallucinate multiple latent target domains in source using image translation, and perform target-to-source alignment separately between domains (adapt). | 36 | | [Domain Adaptive Semantic Segmentation Using Weak Labels](https://arxiv.org/abs/2007.15176) (ECCV '20) | [HackMD](https://hackmd.io/@akshayk07/rydQyAVHv) | [Akshay](https://akshayk07.weebly.com/) | This paper proposes a framework for Domain Adaptation (DA) in semantic segmentation with image-level weak labels in the target domain. They use weak labels to enable the interplay between feature alignment and pseudo-labeling, improving both in DA. | 37 | | [DACS: Domain Adaptation via Cross-domain Mixed Sampling](https://arxiv.org/abs/2007.08702) (WACV '21) | [HackMD](https://hackmd.io/@akshayk07/ByhfvJ7XP) | [Akshay](https://akshayk07.weebly.com/) | This paper proposes Domain Adaptation via Cross-domain Mixed Sampling which mixes images from two domains along with their corresponding labels. These mixed samples are trained on, along with the labelled data itself. | 38 | | [Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation](https://openaccess.thecvf.com/content_CVPR_2020/html/Kim_Learning_Texture_Invariant_Representation_for_Domain_Adaptation_of_Semantic_Segmentation_CVPR_2020_paper.html) (CVPR '20) | [HackMD](https://hackmd.io/@akshayk07/B167fmyGD) | [Akshay](https://akshayk07.weebly.com/) | This paper uses style transfer to enforce texture invariance in the model, followed by self training to adapt to the target domain texture for the semantic segmentation task. | 39 | | [Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision](https://arxiv.org/abs/2004.07703) (CVPR '20 Oral) | [HackMD](https://hackmd.io/@akshayk07/SkwXI-jkP) | [Akshay](https://akshayk07.weebly.com/) | This paper proposes a two-step self-supervised DA approach to minimize the inter-domain and intra-domain gap together. | 40 | | [Unsupervised Domain Adaptation with Residual Transfer Networks](https://papers.nips.cc/paper/6110-unsupervised-domain-adaptation-with-residual-transfer-networks.pdf) (NIPS '16) | [HackMD](https://hackmd.io/@akshayk07/S1O9iopRU) | [Akshay](https://akshayk07.weebly.com/) | A domain adaptation approach that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. 
| 41 | | [Phase Consistent Ecological Domain Adaptation](https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Phase_Consistent_Ecological_Domain_Adaptation_CVPR_2020_paper.html) (CVPR '20) | [HackMD](https://hackmd.io/@akshayk07/HkRSZC00I) | [Akshay](https://akshayk07.weebly.com/) | This paper introduces 2 criteria to regularize the optimization involved in UDA: (1) the map between 2 image domains should be phase-preserving and (2) to leverage regularities in the scene, regardless of the illuminant or imaging sensor. | 42 | | [FDA: Fourier Domain Adaptation for Semantic Segmentation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_FDA_Fourier_Domain_Adaptation_for_Semantic_Segmentation_CVPR_2020_paper.pdf) (CVPR '20) | [HackMD](https://hackmd.io/@akshayk07/SkktSZC0L) | [Akshay](https://akshayk07.weebly.com/) | A simple method for UDA where the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other. | 43 | | [Domain Adaptation for Structured Output via Discriminative Patch Representations](https://arxiv.org/abs/1901.05427) (ICCV '19) | [HackMD](https://hackmd.io/Nh2sTmn1RpSeytghA6E2JQ) | [Akshay](https://akshayk07.weebly.com/) | This paper proposes a UDA approach that explicitly discovers many modes in the structured output space of semantic segmentation to learn a better discriminator between the 2 domains, ultimately leading to a better domain alignment. | 44 | 45 | ### Semantic Segmentation 46 | 47 | | Paper | Notes | Author | Summary | 48 | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------:|:---------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| 49 | | [Semi-Supervised Semantic Segmentation with Cross-Consistency Training](https://openaccess.thecvf.com/content_CVPR_2020/papers/Ouali_Semi-Supervised_Semantic_Segmentation_With_Cross-Consistency_Training_CVPR_2020_paper.pdf) (CVPR '20) | [HackMD](https://hackmd.io/@akshayk07/B1uYpeMNw) | [Akshay](https://akshayk07.weebly.com/) | This paper proposes cross-consistency training, where an invariance of the predictions is enforced over different perturbations applied to the outputs of the encoder (in a shared encoder and multiple decoder architecture). | 50 | | [Gated-SCNN: Gated Shape CNNs for Semantic Segmentation](http://openaccess.thecvf.com/content_ICCV_2019/html/Takikawa_Gated-SCNN_Gated_Shape_CNNs_for_Semantic_Segmentation_ICCV_2019_paper.html) (ICCV '19) | [HackMD](https://hackmd.io/@akshayk07/ryhzTGJor) | [Akshay](https://akshayk07.weebly.com/) | This paper presents a 2-stream CNN i.e. one stream is normal CNN (classical stream) while the other is a shape stream, which explicitly processes shape information in a separate stream. | 51 | | [ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation](https://arxiv.org/abs/1606.02147) | [HackMD](https://hackmd.io/@akshayk07/rJ4NL3sTB) | [Akshay](https://akshayk07.weebly.com/) | This paper presents a network architecture which is faster and more compact, for low real-time inference times. 
| 52 | | [W-Net: A Deep Model for Fully Unsupervised Image Segmentation](https://arxiv.org/abs/1711.08506) | [HackMD](https://hackmd.io/@akshayk07/By3JgvYqB) | [Akshay](https://akshayk07.weebly.com/) | This paper presents fully unsupervised semantic segmentation using deep networks and a soft version of Normalized Cut. | 53 | | [Understanding Deep Learning Techniques for Image Segmentation](https://arxiv.org/abs/1907.06119) | [HackMD](https://hackmd.io/@akshayk07/HkfeY3EqH) | [Akshay](https://akshayk07.weebly.com/) | This paper aims to provide an intuitive understanding of significant DL-based approaches to segmentation. | 54 | | [Recent progress in semantic image segmentation](https://arxiv.org/ftp/arxiv/papers/1809/1809.10198.pdf) | [HackMD](https://hackmd.io/@akshayk07/B1lv_WN9B) | [Akshay](https://akshayk07.weebly.com/) | This paper presents a review of semantic segmentation approaches - traditional as well as DL-based. | 55 | 56 | ### Knowledge Distillation 57 | 58 | | Paper | Notes | Author | Summary | 59 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 60 | | [Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531.pdf) (NIPS '14W) | [HackMD](https://hackmd.io/AntG2tWLQw-dflF5Y1fXig) | [Raj](https://github.com/RajGhugare19) | This paper introduced knowledge distillation for deep networks: knowledge is transferred from a teacher network to a student network by training the student on the softened outputs of the teacher (a minimal code sketch of this loss is included at the end of this file). | 61 | | [A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning](http://openaccess.thecvf.com/content_cvpr_2017/papers/Yim_A_Gift_From_CVPR_2017_paper.pdf) (CVPR '17) | [HackMD](https://hackmd.io/@akshayk07/rkj6RFc28) | [Akshay](https://akshayk07.weebly.com/) | This paper formulates the knowledge to be transferred in terms of flow between layers, calculates it as the inner product between feature maps from two layers, and uses this for Knowledge Distillation. | 62 | | [Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer](https://arxiv.org/abs/1612.03928) (ICLR '17) | [HackMD](https://hackmd.io/@akshayk07/BkzGciz38) | [Akshay](https://akshayk07.weebly.com/) | This paper defines attention for CNNs, and uses it to improve the performance of a student CNN by forcing it to mimic the attention maps of a powerful teacher network. | 63 | | [Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model](https://arxiv.org/abs/2003.13960) (CVPR '20) | [HackMD](https://hackmd.io/nwM8AKmtStGStXbRWVQnrg) | [Akshay](https://akshayk07.weebly.com/) | This paper proposes to blend active learning ([Gissin and Shalev-Shwartz, 2019](https://arxiv.org/abs/1907.06347)) and image mixup ([Zhang et al. 2017](https://arxiv.org/abs/1710.09412)) to tackle data-efficient knowledge distillation from a blackbox teacher model. 
| 64 | | [Data-Free Learning of Student Networks](https://arxiv.org/abs/1904.01186) (ICCV '19) | [HackMD](https://hackmd.io/LMTITxOtSlmrLi877J3Ntg) | [Akshay](https://akshayk07.weebly.com/) | The pre-trained teacher network is treated as a fixed discriminator, and a generator produces training samples that obtain the maximum response from the discriminator. Simultaneously, a smaller network is trained using the generated data and the teacher network. | 65 | 66 | ### Active Learning 67 | 68 | | Paper | Notes | Author | Summary | 69 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 70 | | [Cost-Effective REgion-based Active Learning for Semantic Segmentation](http://bmvc2018.org/contents/papers/0437.pdf) (BMVC '18) | [HackMD](https://hackmd.io/@akshayk07/Byasc3i5v) | [Akshay](https://akshayk07.weebly.com/) | This paper introduces an active learning strategy for semantic segmentation that uses an information measure and an annotation cost estimate. | 71 | | [Variational Adversarial Active Learning](https://arxiv.org/abs/1904.00370) (ICCV '19) | [HackMD](https://hackmd.io/CxZNGh6dS3m2axmP50iN8g) | [Akshay](https://akshayk07.weebly.com/) | This paper introduces a pool-based active learning strategy which learns a low-dimensional latent space from labeled and unlabeled data using a VAE. | 72 | 73 | ### Feature Detection and Description 74 | 75 | | Paper | Notes | Author | Summary | 76 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 77 | | [Patch2Pix: Epipolar-Guided Pixel-Level Correspondences](https://arxiv.org/pdf/2012.01909.pdf) | [HackMD](https://hackmd.io/@GaurArihant/BJ5qmZZlu) | [Arihant](https://flagarihant2000.github.io/arihantgaur/), [Saurabh](https://saurabhkemekar.github.io/Saurabh-Kemekar/) | This paper proposes a new method for determining pixel-level correspondences in a detect-to-refine manner. It follows a weakly supervised learning approach, guided by the epipolar geometry of the input image pair. | 78 | | [Neighbourhood Consensus Networks](https://arxiv.org/pdf/1810.10510.pdf) (NeurIPS '18) | [HackMD](https://hackmd.io/@GaurArihant/S1NAT2QJO) | [Arihant](https://flagarihant2000.github.io/arihantgaur/), [Saurabh](https://saurabhkemekar.github.io/Saurabh-Kemekar/) | The paper proposes an end-to-end trainable CNN architecture that identifies consistent matches by analysing neighbourhood consensus patterns. The paper also demonstrates the use of weak supervision in the form of matching and non-matching image pairs, rather than manual annotations. 
| 79 | | [D2-Net: A Trainable CNN for Joint Description and Detection of Local Features](https://arxiv.org/abs/1905.03561) (CVPR '19) | [HackMD](https://hackmd.io/@AniketGujarathi/SywvV8iQD) | [Aniket Gujarathi](https://www.linkedin.com/in/aniket-gujarathi/?originalSubdomain=in) | This paper introduces a Deep Learning based approach to the problem of local feature detection and description, using a detect-and-describe approach instead of the traditionally used detect-then-describe approach. | 80 | 81 | ### Unsupervised Learning 82 | 83 | | Paper | Notes | Author | Summary | 84 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 85 | | [Deep Clustering for Unsupervised Learning of Visual Features](https://arxiv.org/pdf/1807.05520.pdf) (ECCV '18) | [HackMD](https://hackmd.io/@take2rohit/HJ4CneLDd) | [Rohit](https://rohitlal.net/) | A clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. | 86 | | [Augmented Autoencoders: Implicit 3D Orientation Learning for 6D Object Detection](https://arxiv.org/pdf/1902.01275.pdf) (ECCV '18) | [HackMD](https://hackmd.io/@6GX-kbOaSt6hNkpWQyj20A/r1tnl1gQD) | [Aayush](https://github.com/aayush-fadia), [Jayesh](https://github.com/jayeshk7), [Saketh](https://github.com/sakethbachu) | This paper presents a real-time RGB-based pipeline for object detection and 6D pose estimation, based on a variant of the denoising autoencoder, the Augmented Autoencoder, trained on views of a 3D model using domain randomization. | 87 | | [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/abs/2002.05709) (ICML '20) | [HackMD](https://hackmd.io/@mathurpulkit/HJX91MnJK) | [Pulkit](https://github.com/mathurpulkit) | This paper provides a simpler and more efficient approach to contrastive self-supervised learning without requiring specialised architectures. It combines design choices such as strong data augmentation, a nonlinear projection head, and large-batch contrastive training to achieve SOTA performance on representation learning. 
| 88 | 89 | ### Object Detection 90 | 91 | | Paper | Notes | Author | Summary | 92 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 93 | | [Fast R-CNN](https://ieeexplore.ieee.org/document/7410526) (ICCV '15) | [HackMD](https://hackmd.io/@siddxsingh/S1wWhX_wd/edit) | [Arihant](https://flagarihant2000.github.io/arihantgaur/), [Saketh](https://github.com/sakethbachu), [Siddharth](https://www.linkedin.com/in/siddharth-s-8a63a4120/) | This paper extends R-CNN, achieving a test time 213 times faster than R-CNN and 10 times faster than SPPnet. | 94 | | [Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition](https://arxiv.org/abs/1406.4729.pdf) (TPAMI '15) | [HackMD](https://hackmd.io/@dl-CpoNoTiysMYjEsQTDOw/BkYRf5gxd) | [Arihant](https://flagarihant2000.github.io/arihantgaur/), [Saketh](https://github.com/sakethbachu), [Siddharth](https://www.linkedin.com/in/siddharth-s-8a63a4120/) | The paper proposes a workaround to feeding a fixed-size input to CNNs. Resizing the input can reduce recognition accuracy for images and sub-images, and cropping and resizing often result in unwanted geometric distortions. The authors add a ‘spatial pyramid pooling’ layer after the convolution layers to remove the fixed-size constraint on the network. | 95 | | [OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks](https://arxiv.org/pdf/1312.6229.pdf) (ICLR '14) | [HackMD](https://hackmd.io/@6GX-kbOaSt6hNkpWQyj20A/BJOwpQ67v) | [Jayesh](https://github.com/jayeshk7), [Saketh](https://github.com/sakethbachu) | This paper presents a framework for classification, localization and detection of objects using a multiscale and sliding-window approach. It can perform multiple tasks using a single shared network. The second important finding of this paper is that ConvNets can be used effectively for detection and localization tasks. | 96 | | [Rich feature hierarchies for accurate object detection and semantic segmentation](https://arxiv.org/abs/1311.2524) (CVPR '14) | [HackMD](https://hackmd.io/@6GX-kbOaSt6hNkpWQyj20A/BylBkiYYv) | [Jayesh](https://github.com/jayeshk7), [Saketh](https://github.com/sakethbachu) | This paper proposes a framework that handles the object detection task in two steps: first, region proposals are generated to localize and segment objects, and second, these proposals are classified. This method improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012, achieving a mAP of 53.3%. | 97 | | [You Only Look Once: Unified, Real-Time Object Detection](https://drive.google.com/file/d/1snL7fmrkU8XuHoSLeb-iXoWLmsW-EQfQ/view?usp=sharing) | [GDrive](https://drive.google.com/file/d/1snL7fmrkU8XuHoSLeb-iXoWLmsW-EQfQ/view?usp=sharing) | [Rohit](https://rohitlal.net/) | One of the most popular object detection algorithms. It frames object detection as a regression problem, mapping images directly to spatially separated bounding boxes and associated class probabilities. 
| 98 | 99 | ### Curriculum Learning 100 | 101 | | Paper | Notes | Author | Summary | 102 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 103 | | [When do curricula work?](https://openreview.net/forum?id=tW4QEInpni) (ICLR '21) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/rk-wOnG1u) | [Sharath](https://sharathraparthy.github.io/) | This paper conducts a large-scale study of curriculum learning methods in the supervised learning setting and draws interesting conclusions about when curricula are effective. | 104 | 105 | ### Bayesian Neural Networks 106 | 107 | | Paper | Notes | Author | Summary | 108 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 109 | | [Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning](https://arxiv.org/abs/1907.00865) (AISTATS '20) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/SySL97y3D) | [Sharath](https://sharathraparthy.github.io/) | This paper studies the well-known soap-bubble problem in high-dimensional probability spaces and how mean-field variational inference suffers from it. As a workaround, the paper proposes a new posterior approximation in the hyperspherical coordinate system. | 110 | 111 | ### Causality 112 | 113 | | Paper | Notes | Author | Summary | 114 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 115 | | [Recurrent Independent Mechanisms](https://arxiv.org/pdf/1909.10893.pdf) (ICLR '21) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/BkydH2cx_) | [Sharath](https://sharathraparthy.github.io/) | This paper proposes a new recurrent architecture that takes modularity and independence of mechanisms into account and shows how this helps generalisation. 
| 116 | 117 | ### Anomaly Detection 118 | 119 | | Paper | Notes | Author | Summary | 120 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 121 | | [Abnormal Event Detection in Videos using Spatio Temporal Autoencoder](https://arxiv.org/pdf/1701.01546.pdf) | [HackMD](https://hackmd.io/@iGBkTz2JQ2eBRM83nuhCuA/H1J4PB6l_) | [Raj](https://github.com/RajGhugare19) | This paper proposes a new architecture for anomaly detection in videos. Their architecture includes two main components: one for spatial feature representation and one for learning the temporal evolution of these spatial features. | 122 | 123 | ### Generative Adversarial Nets 124 | 125 | | Paper | Notes | Author | Summary | 126 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 127 | | [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661.pdf) (NeurIPS '14) | [HackMD](https://hackmd.io/@iGBkTz2JQ2eBRM83nuhCuA/H1SU1W6WO) | [Raj](https://github.com/RajGhugare19) | This paper proposes a novel "adversarial method" for data generation. It is now considered one of the classics of deep learning. 
| 128 | | [Disentangled Inference for GANs with Latently Invertible Autoencoder](https://arxiv.org/pdf/1906.08090.pdf) | [HackMD](https://hackmd.io/@gaEyWwreTOqh_Vk_GkFMkw/ryJJKvFTt) | [Vignesh](https://github.com/vignesh-creator) | This paper proposes a novel generative model named Latently Invertible Autoencoder (LIA), which tackles the entanglement problem that often occurs in GANs and generates high-quality images from disentangled latents. | 129 | 130 | 131 | 132 | ### Recurrent Networks 133 | 134 | | Paper | Notes | Author | Summary | 135 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 136 | | [Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/B1mLneEMu) | [HackMD](https://hackmd.io/@FtbpSED3RQWclbmbmkChEA/B1mLneEMu) | [Sharath](https://sharathraparthy.github.io/) | This paper analyses the dynamics of RNNs using tools from dynamical systems theory. | 137 | 138 | ### Real-world DL applications 139 | 140 | | Paper | Notes | Author | Summary | 141 | |:--------------------:|:--------------------:|:--------------------:|:--------------------:| 142 | | [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832v3.pdf) | [HackMD](https://hackmd.io/@ABD/SJa0J7_Od) | [Muhammed Abdullah](https://github.com/ABD-01) | This paper introduces the famous triplet loss for training deep networks for face verification, recognition and clustering, achieving state-of-the-art accuracy (a minimal code sketch of the triplet loss is included at the end of this file). | 143 | | [On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vector](https://arxiv.org/abs/2005.02000) (IJCNN '20) | [HackMD](https://hackmd.io/TAU7CmEcSbalI2xYJlCg9g?view) | [Arihant](https://flagarihant2000.github.io/arihantgaur/), [Prasad](https://prasadvagdargi.github.io/), [Saketh](https://github.com/sakethbachu) | This paper states that current methods for computer-aided diagnosis are not widely accepted due to their opaque nature. The main aim is to design a deep learning model that is trained to make decisions in a manner similar to medical experts. Concept Activation Vectors are used to map human-understandable concepts to RECOD images. The results show that the classifier learns and encodes human-understandable concepts in its latent representation. 
| 144 | | [Concept Learning with Energy-Based Models](https://arxiv.org/abs/1811.02486) | [HackMD](https://hackmd.io/@GaurArihant/B1oxQG2d_) | [Arihant](https://flagarihant2000.github.io/arihantgaur/), [Saketh](https://github.com/sakethbachu) | Many hallmarks of human intelligence require the ability to convert experience into concepts. This paper proposes representing such concepts in the form of energy functions. The framework is evaluated on learning visual, quantitative, relational, and temporal concepts in an unsupervised way. | 145 | 146 | --------------------------------------------------------------------------------
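The distillation recipe described in the Knowledge Distillation table above (training the student on the teacher's softened outputs) is compact enough to sketch in code. The snippet below is a minimal PyTorch-style illustration, not the authors' implementation; the temperature `T`, the weighting `alpha`, and the tensor names are assumptions chosen for readability.

```python
# Minimal sketch of the softened-output distillation loss described above.
# Assumes `student_logits` and `teacher_logits` are [batch, num_classes] tensors
# and `labels` holds ground-truth class indices; T and alpha are hypothetical choices.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: teacher and student distributions softened by temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # KL term is scaled by T^2 so its gradient magnitude matches the hard-label term.
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Usual cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```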
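Likewise, the triplet loss mentioned in the FaceNet entry can be sketched in a few lines. This is an illustrative sketch assuming L2-normalised embeddings and a margin of 0.2 (the margin reported in the paper), not the authors' implementation.

```python
# Minimal sketch of the triplet loss from the FaceNet summary above.
# anchor, positive, negative are [batch, embed_dim] tensors of L2-normalised embeddings.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances between anchor-positive and anchor-negative pairs.
    pos_dist = (anchor - positive).pow(2).sum(dim=1)
    neg_dist = (anchor - negative).pow(2).sum(dim=1)
    # Hinge: push the negative at least `margin` farther away than the positive.
    return F.relu(pos_dist - neg_dist + margin).mean()
```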