# Deep Tutorials for [PyTorch](https://pytorch.org)

This is a series of in-depth tutorials I'm writing on implementing cool deep learning models on your own with the amazing PyTorch library.

Basic knowledge of PyTorch and neural networks is assumed.

If you're new to PyTorch, first read [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) and [Learning PyTorch with Examples](https://pytorch.org/tutorials/beginner/pytorch_with_examples.html).

---

**24 Apr 2023**: I've just completed the [Super-Resolution](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Super-Resolution) and [Transformers](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Transformers) tutorials.

**09 Dec 2023**: Interested in chess or transformers? Check out [Chess Transformers](https://github.com/sgrvinod/chess-transformers).

---

In each tutorial, we will focus on a specific application or area of interest by implementing a model from a research paper.
Application | Paper | Tutorial | Also Learn About | Status
:---: | :---: | :---: | :---: | :---:
Image Captioning | [_Show, Attend, and Tell_](https://arxiv.org/abs/1502.03044) | [a PyTorch Tutorial to Image Captioning](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning) | • encoder-decoder architecture<br>• attention<br>• transfer learning<br>• beam search | 🟢<br>*complete*
Sequence Labeling | [_Empower Sequence Labeling with Task-Aware Neural Language Model_](https://arxiv.org/abs/1709.04109) | [a PyTorch Tutorial to Sequence Labeling](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Sequence-Labeling) | • language models<br>• character RNNs<br>• multi-task learning<br>• conditional random fields<br>• Viterbi decoding<br>• highway networks | 🟢<br>*complete*
Object Detection | [_SSD: Single Shot MultiBox Detector_](https://arxiv.org/abs/1512.02325) | [a PyTorch Tutorial to Object Detection](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection) | • single-shot detection<br>• multiscale feature maps<br>• priors<br>• multibox<br>• hard negative mining<br>• non-maximum suppression | 🟢<br>*complete*
Text Classification | [_Hierarchical Attention Networks for Document Classification_](https://www.semanticscholar.org/paper/Hierarchical-Attention-Networks-for-Document-Yang-Yang/1967ad3ac8a598adc6929e9e6b9682734f789427) | [a PyTorch Tutorial to Text Classification](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Text-Classification) | • hierarchical attention | 🟡<br>*code complete*
Super-Resolution | [_Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network_](https://arxiv.org/abs/1609.04802) | [a PyTorch Tutorial to Super-Resolution](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Super-Resolution) | • **GANs** (this is also a GAN tutorial)<br>• residual connections<br>• sub-pixel convolution<br>• perceptual loss | 🟢<br>*complete*
Machine Translation | [_Attention Is All You Need_](https://arxiv.org/abs/1706.03762) | [a PyTorch Tutorial to Transformers](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Transformers) | • **transformers**<br>• multi-head attention<br>• positional embeddings<br>• encoder-decoder architecture<br>• byte pair encoding<br>• beam search | 🟢<br>*complete*
Semantic Segmentation | [_SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers_](https://arxiv.org/abs/2105.15203) | a PyTorch Tutorial to Semantic Segmentation | N/A | 🔴<br>*planned*