├── LICENSE
├── README.md
├── notebooks
│   ├── code_checklist-analysis.Rmd
│   ├── code_checklist-analysis.pdf
│   └── code_checklist-neurips2019.csv
└── templates
    └── README.md
/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Papers with code 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Tips for Publishing Research Code 2 | 3 | 4 | 5 | **💡 Collated best practices from the most popular ML research repositories - *now official guidelines at NeurIPS 2021!*** 6 | 7 | Based on an analysis of more than 200 Machine Learning repositories, these recommendations facilitate reproducibility and correlate with GitHub stars - for more details, see [our blog post](https://medium.com/paperswithcode/ml-code-completeness-checklist-e9127b168501). 8 | 9 | For NeurIPS 2021 code submissions it is recommended (but not mandatory) to use the [README.md template](templates/README.md) and check as many items on the ML Code Completeness Checklist (described below) as possible. 10 | 11 | ## 📋 README.md template 12 | 13 | We provide a [README.md template](templates/README.md) that you can use for releasing ML research repositories. The sections in the template were derived by looking at existing repositories, seeing which had the best reception in the community, and identifying the common components that correlate with popularity. 14 | 15 | ## ✓ ML Code Completeness Checklist 16 | 17 | We compiled this checklist by looking at what's common to the most popular ML research repositories. In addition, we prioritized items that facilitate reproducibility and make it easier for others to build upon research code. 18 | 19 | The ML Code Completeness Checklist consists of five items: 20 | 21 | 1. **Specification of dependencies** 22 | 2. **Training code** 23 | 3. **Evaluation code** 24 | 4. **Pre-trained models** 25 | 5. **README file including a table of results accompanied by precise commands to reproduce those results** 26 | 27 | Repositories that check more items on the checklist tend to have a higher number of GitHub stars; we verified this by analysing official NeurIPS 2019 repositories - more details in the [blog post](https://medium.com/paperswithcode/ml-code-completeness-checklist-e9127b168501).
We also provide the [data](notebooks/code_checklist-neurips2019.csv) and [notebook](notebooks/code_checklist-analysis.pdf) to reproduce this analysis from the post. 28 | 29 | NeurIPS 2019 repositories that had all five of these components had the highest number of GitHub stars (median of 196 and mean of 2,664 stars). 30 | 31 | We explain each item on the checklist in detail below. 32 | 33 | #### 1. Specification of dependencies 34 | 35 | If you are using Python, this means providing a `requirements.txt` file (if using `pip` and `virtualenv`), an `environment.yml` file (if using Anaconda), or a `setup.py` if your code is a library. 36 | 37 | It is good practice to provide a section in your README.md that explains how to install these dependencies. Assume minimal background knowledge and be clear and comprehensive - if users cannot set up your dependencies they are likely to give up on the rest of your code as well. 38 | 39 | If you wish to provide whole reproducible environments, you might want to consider using Docker and uploading a Docker image of your environment to Docker Hub. 40 | 41 | #### 2. Training code 42 | 43 | Your code should have a training script that can be used to obtain the principal results stated in the paper. This means you should include hyperparameters and any tricks that were used in the process of getting your results. To maximize usefulness, ideally this code should be written with extensibility in mind: what if your user wants to use the same training script on their own dataset? 44 | 45 | You can provide a documented command line wrapper such as `train.py` to serve as a useful entry point for your users. 46 | 47 | #### 3. Evaluation code 48 | 49 | Model evaluation and experiments often depend on subtle details that are not always possible to explain in the paper. This is why including the exact code you used to evaluate or run experiments is helpful to give a complete description of the procedure. In turn, this helps the user to trust, understand, and build on your research. 50 | 51 | You can provide a documented command line wrapper such as `eval.py` to serve as a useful entry point for your users. 52 | 53 | #### 4. Pre-trained models 54 | 55 | Training a model from scratch can be time-consuming and expensive. One way to increase trust in your results is to provide a pre-trained model that the community can evaluate to obtain the end results. This means users can see the results are credible without having to train afresh. 56 | 57 | Another common use case is fine-tuning for a downstream task, where it's useful to release a pretrained model so others can build on it for application to their own datasets. 58 | 59 | Lastly, some users might want to try out your model to see if it works on some example data. Providing pre-trained models allows your users to play around with your work and aids understanding of the paper's achievements. 60 | 61 | #### 5. README file including a table of results accompanied by precise commands to reproduce those results 62 | 63 | Adding a table of results to README.md lets your users quickly understand what to expect from the repository (see the [README.md template](templates/README.md) for an example). Instructions on how to reproduce those results (with links to any relevant scripts, pretrained models, etc.) can provide another entry point for the user and directly facilitate reproducibility. In some cases, the main result of a paper is a figure; since a bare figure can be hard to interpret without reading the paper, include it in the README together with a link to the command or notebook that reproduces it.
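For illustration, a minimal results entry might pair each reported number with the exact command that produced it; the model name, metric, and command below are placeholders echoing the [README.md template](templates/README.md), not results from a real paper:

```markdown
| Model name       | Top 1 Accuracy | Reproduce with                            |
| ---------------- | -------------- | ----------------------------------------- |
| My awesome model | 85%            | `python eval.py --model-file mymodel.pth` |
```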
64 | 65 | You can further help the user understand and contextualize your results by linking back to the full leaderboard that has up-to-date results from other papers. There are [multiple leaderboard services](#results-leaderboards) where this information is stored. 66 | 67 | ## 🎉 Additional awesome resources for releasing research code 68 | 69 | ### Hosting pretrained model files 70 | 71 | 1. [Zenodo](https://zenodo.org) - versioning, 50GB, free bandwidth, DOI, provides long-term preservation 72 | 2. [GitHub Releases](https://help.github.com/en/github/administering-a-repository/managing-releases-in-a-repository) - versioning, 2GB file limit, free bandwidth 73 | 3. [OneDrive](https://www.onedrive.com/) - versioning, 2GB (free) / 1TB (with Office 365), free bandwidth 74 | 4. [Google Drive](https://drive.google.com) - versioning, 15GB, free bandwidth 75 | 5. [Dropbox](https://dropbox.com) - versioning, 2GB (unlimited if paid), free bandwidth 76 | 6. [AWS S3](https://aws.amazon.com/s3/) - versioning, paid only, paid bandwidth 77 | 7. [huggingface_hub](https://github.com/huggingface/huggingface_hub) - versioning, no size limitations, free bandwidth 78 | 8. [DAGsHub](https://dagshub.com/) - versioning, no size limitations, free bandwidth 79 | 9. [CodaLab Worksheets](https://worksheets.codalab.org/) - 10GB, free bandwidth 80 | 81 | ### Managing model files 82 | 83 | 1. [RClone](https://rclone.org/) - provides unified access to many different cloud storage providers 84 | 85 | ### Standardized model interfaces 86 | 87 | 1. [PyTorch Hub](https://pytorch.org/hub/) 88 | 2. [TensorFlow Hub](https://www.tensorflow.org/hub) 89 | 3. [Hugging Face NLP models](https://huggingface.co/models) 90 | 4. [ONNX](https://onnx.ai/) 91 | 92 | ### Results leaderboards 93 | 94 | 1. [Papers with Code leaderboards](https://paperswithcode.com/sota) - with 4000+ leaderboards 95 | 2. [CodaLab Competitions](https://competitions.codalab.org/) - with 450+ leaderboards 96 | 3. [EvalAI](https://eval.ai/) - with 100+ leaderboards 97 | 4. [NLP Progress](https://nlpprogress.com/) - with 90+ leaderboards 98 | 5. [Collective Knowledge](https://cKnowledge.io/reproduced-results) - with 40+ leaderboards 99 | 6. [Weights & Biases - Benchmarks](https://www.wandb.com/benchmarks) - with 9+ leaderboards 100 | 101 | ### Making project pages 102 | 103 | 1. [GitHub Pages](https://pages.github.com/) 104 | 2. [Fastpages](https://github.com/fastai/fastpages) 105 | 106 | ### Making demos, tutorials, executable papers 107 | 108 | 1. [Google Colab](https://colab.research.google.com/) 109 | 2. [Binder](https://mybinder.org/) 110 | 3. [Streamlit](https://github.com/streamlit/streamlit) 111 | 4. [CodaLab Worksheets](https://worksheets.codalab.org/) 112 | 113 | ## Contributing 114 | 115 | If you'd like to contribute, or have any suggestions for these guidelines, you can contact us at hello@paperswithcode.com or open an issue on this GitHub repository. 116 | 117 | All contributions welcome! All content in this repository is licensed under the MIT license. 118 | -------------------------------------------------------------------------------- /notebooks/code_checklist-analysis.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "ML Code Completeness Checklist Analysis" 3 | output: 4 | pdf_document: default 5 | html_notebook: default 6 | --- 7 | 8 | This notebook contains the ML Code Completeness analysis for NeurIPS 2019 repositories.
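To run and render the notebook yourself, something like the following should work; this is an illustrative command, assuming R with the `rmarkdown` package (plus pandoc and a LaTeX distribution) is installed:

```r
# Knit the analysis to PDF from within the notebooks/ directory
rmarkdown::render("code_checklist-analysis.Rmd", output_format = "pdf_document")
```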
9 | 10 | For an already rendered and executed version of this notebook, see: [code_checklist-analysis.pdf](code_checklist-analysis.pdf). 11 | 12 | Official repositories for NeurIPS 2019 papers were fetched from: https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019 13 | 14 | A random 25% sample was selected and manually annotated according to the five criteria of the ML Code Completeness Checklist. The results were saved into `code_checklist-neurips2019.csv`. 15 | 16 | ```{r} 17 | library(tidyverse) 18 | library(ggplot2) 19 | library(MASS) 20 | library(RColorBrewer) 21 | 22 | t = read_csv("code_checklist-neurips2019.csv") 23 | cat("Number of rows:", nrow(t), "\n") 24 | ``` 25 | 26 | We'll focus only on Python repositories, since this is the dominant language in ML and repositories in other languages tend to have a smaller number of stars just because the community is smaller. 27 | 28 | ```{r} 29 | t = t[t$python==1,] 30 | cat("Number of rows:", nrow(t), "\n") 31 | ``` 32 | 33 | Next, we calculate each repository's score as the sum of the individual checklist items. 34 | 35 | ```{r} 36 | t$score = rowSums(t[,4:8]) # columns 4:8 hold the five binary checklist items 37 | ``` 38 | 39 | We group repositories based on their score and calculate summary stats. 40 | 41 | ```{r} 42 | 43 | cat("Spread of values in each group:\n") 44 | summaries = tapply(t$stars, t$score, summary) 45 | names(summaries) = paste(names(summaries), "ticks") 46 | print(summaries) 47 | 48 | cat("Proportion of repos in each group:\n") 49 | props = tapply(t$stars, t$score, length) 50 | props = props/sum(props) 51 | names(props) = paste(names(props), "ticks") 52 | print(props) 53 | 54 | # Extract medians 55 | medians = unlist(lapply(tapply(t$stars, t$score, summary), function(x) x["Median"])) 56 | names(medians) = paste(sub(".Median", "", names(medians)), "ticks") 57 | ``` 58 | 59 | Generate summary graphs. 60 | 61 | ```{r} 62 | par(oma=c(0,1,0,1)) 63 | layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(3,2)) 64 | barplot(medians, 65 | xlab="", 66 | ylab="Median GitHub stars", ylim=c(0,200), 67 | col=brewer.pal(6, "Blues"), cex.axis=0.6, cex.names=0.6) 68 | mtext("GitHub repos grouped by number of ticks on ML code checklist", side=1, line=3, cex=0.6) 69 | 70 | pie(rev(props), col=rev(brewer.pal(6, "Blues")), cex=0.6) 71 | mtext("Proportion of repositories in each group", side=1, line=3, cex=0.6) 72 | ``` 73 | 74 | Compare using box plots.
75 | 76 | ```{r} 77 | tp = t 78 | tp$score = as.factor(tp$score) # factor scores so each tick count gets its own box 79 | par(mfrow=c(1,1)) 80 | boxplot(stars~score, data=tp, ylim=c(0,200), col=brewer.pal(6, "Blues"), 81 | xlab="ML code checklist ticks", ylab="GitHub stars") 82 | ``` 83 | 84 | Fit a robust regression and test the significance of the results. 85 | 86 | ```{r} 87 | print(summary(rlm(stars~training+evaluation+pretrained_model+results+dependencies, data=t))) 88 | 89 | for(i in 0:4){ 90 | cat("\nScore 5 vs score", i, "\n") 91 | print(wilcox.test(t$stars[t$score==5], t$stars[t$score==i])) 92 | } 93 | ``` 94 | 95 | ### Session information 96 | 97 | ```{r} 98 | sessionInfo() 99 | ``` 100 | -------------------------------------------------------------------------------- /notebooks/code_checklist-analysis.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paperswithcode/releasing-research-code/a5b2c85490435108e306d38c64a0d2a558f110e6/notebooks/code_checklist-analysis.pdf -------------------------------------------------------------------------------- /notebooks/code_checklist-neurips2019.csv: -------------------------------------------------------------------------------- 1 | url,stars,python,training,evaluation,pretrained_model,results,dependencies 2 | https://github.com/Microsoft/EconML,654,1,1,0,0,0,1 3 | https://github.com/cornellius-gp/gpytorch,1841,1,1,1,0,1,1 4 | https://github.com/trevorcampbell/bayesian-coresets,73,1,1,1,0,0,1 5 | https://github.com/voxelmorph/voxelmorph,652,1,1,1,1,1,0 6 | https://github.com/stanfordnlp/mac-network,406,1,1,1,0,0,1 7 | https://github.com/albietz/ckn_kernel,3,1,1,1,0,0,1 8 | https://github.com/akshaykr/oracle_cb,15,1,1,1,0,0,1 9 | https://github.com/ShangtongZhang/DeepRL,1830,1,1,1,0,1,1 10 | https://github.com/RonanFR/UCRL,21,1,1,1,0,1,1 11 | https://github.com/deepmind/lab,6082,1,1,1,0,0,1 12 | https://github.com/chris-nemeth/pseudo-extended-mcmc-code,3,1,0,0,0,0,0 13 | https://github.com/ryantibs/conformal,66,0,0,0,0,0,0 14 | https://github.com/kuleshov/audio-super-res,291,1,1,1,0,0,1 15 | https://github.com/mejbah/AutoPerf,9,1,1,0,0,0,0 16 | https://github.com/GuyLor/direct_vae,3,1,1,0,0,0,1 17 | https://github.com/pytorch/pytorch,36549,1,1,1,1,1,1 18 | https://github.com/bobye/WBC_Matlab,12,0,0,1,0,0,0 19 | https://github.com/guilgautier/DPPy,89,1,1,0,0,0,1 20 | https://github.com/liusf15/Sketching-lr,3,1,0,1,0,0,0 21 | https://github.com/tensorflow/lingvo,1914,1,1,1,1,0,1 22 | https://github.com/tensorflow/tpu,3120,1,1,1,1,0,1 23 | https://github.com/google-research/google-research,7657,1,1,1,1,1,1 24 | https://github.com/Microsoft/EdgeML,775,1,1,0,1,0,1 25 | https://github.com/alherit/kd-switch,2,1,1,1,0,0,1 26 | https://github.com/Lezcano/expRNN,59,1,1,0,0,0,0 27 | https://github.com/deep-fry/mayo,78,1,1,1,0,0,1 28 | https://github.com/submission2019/cnn-quantization,89,1,1,1,0,1,1 29 | https://github.com/lorenmt/maxl,51,1,1,0,0,0,0 30 | https://github.com/ayanc/blpa,5,1,1,0,0,0,0 31 | https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries,88,1,1,1,1,1,1 32 | https://github.com/tkusmierczyk/lcvi,2,1,0,1,0,1,1 33 | https://github.com/wushanshan/GraphLearn,1,0,0,0,0,0,0 34 | https://github.com/RixonC/DINGO,1,1,1,1,0,0,0 35 | https://github.com/wjmaddox/swa_gaussian,149,1,1,1,0,1,1 36 | https://github.com/zihangdai/reexamine-srnn,9,1,1,1,0,0,1 37 | https://github.com/QB3/CLaR,9,1,1,1,0,0,1 38 | https://github.com/google-research/disentanglement_lib,812,1,1,1,1,0,1 39 |
https://github.com/pathak22/modular-assemblies,80,1,1,0,0,0,1 40 | https://github.com/steverab/failing-loudly,12,1,0,1,1,0,1 41 | https://github.com/supratikp/HOOF,11,1,1,1,0,0,1 42 | https://github.com/amzn/emukit,179,1,1,1,0,0,1 43 | https://github.com/revbucket/geometric-certificates,28,1,1,1,0,0,1 44 | https://github.com/eth-sri/eran,114,1,1,1,1,1,1 45 | https://github.com/gowerrobert/StochOpt.jl,13,1,0,1,0,0,1 46 | https://github.com/brekelma/echo,15,1,1,1,0,0,1 47 | https://github.com/Hadisalman/robust-verify-benchmark,26,1,1,1,1,1,1 48 | https://github.com/uncbiag/registration,79,0,1,0,0,0,0 49 | https://github.com/dppalomar/spectralGraphTopology,11,0,0,0,0,0,1 50 | https://github.com/emilemathieu/pvae,30,1,1,0,0,0,1 51 | https://github.com/ashafahi/free_adv_train,87,1,1,1,0,1,1 52 | https://github.com/kakaobrain/fast-autoaugment,834,1,1,1,1,1,1 53 | https://github.com/a1600012888/YOPO-You-Only-Propagate-Once,120,1,1,0,0,0,1 54 | https://github.com/FerranAlet/modular-metalearning,35,1,1,0,0,0,1 55 | https://github.com/facebookresearch/XLM,1913,1,1,0,1,1,1 56 | https://github.com/yromano/cqr,19,1,1,1,0,0,1 57 | https://github.com/muhanzhang/D-VAE,40,1,1,1,1,0,1 58 | https://github.com/Cerebras/online-normalization,29,1,1,1,0,1,1 59 | https://github.com/google-research/mixmatch,761,1,1,1,0,0,1 60 | https://github.com/Bai-Li/STN-Code,8,1,1,1,1,0,1 61 | https://github.com/EmilienDupont/augmented-neural-odes,300,1,1,1,0,0,1 62 | https://github.com/ibalazevic/multirelational-poincare,64,1,1,1,0,1,1 63 | https://github.com/polo5/ZeroShotKnowledgeTransfer,58,1,1,0,1,0,1 64 | https://github.com/mickaelChen/ReDO,107,1,1,1,1,0,0 65 | https://github.com/omitakahiro/NeuralNetworkPointProcess,10,1,1,0,0,0,0 66 | https://github.com/jfc43/robust-attribution-regularization,8,1,1,1,0,0,1 67 | https://github.com/XuezheMax/macow,32,1,1,1,0,0,1 68 | https://github.com/mmahesh/cocain-bpg-matrix-factorization,2,1,0,0,0,1,1 69 | https://github.com/facebookresearch/qmnist,182,1,1,0,0,1,0 70 | https://github.com/loeweX/Greedy_InfoMax,162,1,1,1,0,0,1 71 | https://github.com/eleurent/highway-env,260,1,1,1,0,0,1 72 | https://github.com/Khurramjaved96/mrcl,90,1,1,1,1,0,1 73 | https://github.com/fkunstner/limitations-empirical-fisher,26,1,0,1,0,0,1 74 | https://github.com/CausalML/DeepGMM,11,1,0,1,0,0,0 75 | https://github.com/edebrouwer/gru_ode_bayes,44,1,1,1,0,0,1 76 | https://github.com/mtrager/polynomial_networks,1,0,0,0,0,0,0 77 | https://github.com/f-t-s/CGD,10,0,0,0,0,0,0 78 | https://github.com/astirn/MV-Kumaraswamy,1,1,1,1,0,0,1 79 | https://github.com/xiaozhanguva/Measure-Concentration,6,1,1,1,0,0,1 80 | https://github.com/nnRNN/nnRNN_release,16,1,1,1,0,1,0 81 | https://github.com/AntreasAntoniou/Learning_to_Learn_via_Self-Critique,34,1,1,1,0,1,1 82 | https://github.com/nrgeup/controllable-text-attribute-transfer,76,1,1,0,0,0,1 83 | https://github.com/ShannonAI/glyce,203,1,1,1,1,1,1 84 | https://github.com/GiulsLu/Sinkhorn-Barycenters,4,1,0,1,0,0,1 85 | https://github.com/instadeepai/AlphaNPI,46,1,1,0,0,0,1 86 | https://github.com/max-andr/provably-robust-boosting,26,1,1,1,0,1,1 87 | https://github.com/vveitch/causal-network-embeddings,22,1,1,1,0,0,1 88 | https://github.com/harhro94/T-CorEx,12,1,1,1,0,0,1 89 | https://github.com/microsoft/petridishnn,99,1,1,1,0,0,1 90 | https://github.com/epfml/powersgd,20,1,1,0,0,0,0 91 | https://github.com/Yang7879/3D-BoNet,213,1,1,1,1,1,1 92 | https://github.com/joansj/blow,95,1,1,1,0,0,1 93 | https://github.com/claudiashi57/dragonnet,31,1,1,1,0,0,1 94 | 
https://github.com/ds4dm/learn2branch,75,1,1,1,0,0,1 95 | https://github.com/greydanus/hamiltonian-nn,147,1,1,1,0,1,1 96 | https://github.com/trevorcampbell/ubvi,3,1,0,0,0,0,1 97 | https://github.com/brjathu/RPSnet,28,1,1,1,0,1,1 98 | https://github.com/alaflaquiere/learn-spatial-structure,0,1,1,1,0,0,1 99 | https://github.com/TURuibo/Neuropathic-Pain-Diagnosis-Simulator,4,1,1,1,0,0,1 100 | https://github.com/PwnerHarry/Stronger_GCN,26,1,1,0,0,0,1 101 | https://github.com/kynkaat/improved-precision-and-recall-metric,34,1,0,1,0,1,1 102 | https://github.com/rr-learning/disentanglement_dataset,35,1,0,0,0,0,0 103 | https://github.com/bayesiains/nsf,101,1,1,1,0,0,1 104 | https://github.com/twankim/avod_ssn,3,1,1,1,0,0,1 105 | https://github.com/hendrycks/ss-ood,92,1,1,1,0,0,0 106 | https://github.com/Philip-Bachman/amdim-public,233,1,1,1,1,0,1 107 | https://github.com/jialinwu17/Self_Critical_VQA,21,1,1,1,1,0,1 108 | https://github.com/bknyaz/graph_attention_pool,91,1,1,1,1,1,1 109 | https://github.com/rowanz/grover,588,1,1,0,1,0,1 110 | https://github.com/songlab-cal/tape,78,1,1,1,1,1,1 111 | https://github.com/zihangdai/xlnet,5114,1,1,1,1,1,0 112 | https://github.com/charliemarx/disentangling-influence,2,1,1,1,0,0,1 113 | https://github.com/sato9hara/sgd-influence,22,1,1,1,0,1,0 114 | https://github.com/twistedcubic/que-outlier-detection,18,1,1,1,0,0,1 115 | https://github.com/allenai/dnw,114,1,1,1,1,1,1 116 | https://github.com/glivan/tensor_networks_for_probabilistic_modeling,16,1,1,0,0,0,1 117 | https://github.com/tginart/deletion-efficient-kmeans,15,1,1,1,0,0,0 118 | https://github.com/ermongroup/ncsn,167,1,1,0,1,1,1 119 | https://github.com/michaelrzhang/lookahead,89,1,0,0,0,0,0 120 | https://github.com/pfnet-research/einconv,16,1,1,0,0,0,1 121 | https://github.com/ckheaukulani/swpr,1,1,1,0,0,0,1 122 | https://github.com/rtqichen/residual-flows,122,1,1,0,1,1,1 123 | https://github.com/jiasenlu/vilbert_beta,222,1,1,1,1,1,1 124 | https://github.com/zhangxiaosong18/FreeAnchor,603,1,1,1,1,1,1 125 | https://github.com/zsyOAOA/VDNet,60,1,1,1,0,1,1 126 | https://github.com/drorsimon/CSCNet,8,1,1,1,1,0,1 127 | https://github.com/HongtengXu/s-gwl,7,1,1,0,0,0,1 128 | https://github.com/youzhonghui/gate-decorator-pruning,149,1,1,0,1,0,1 129 | https://github.com/inspire-group/robustness-via-transport,4,1,1,1,0,0,1 130 | https://github.com/tbepler/spatial-VAE,17,1,1,0,0,0,1 131 | https://github.com/Muzammal-Naseer/Cross-domain-perturbations,7,1,1,1,1,0,1 132 | https://github.com/microsoft/unilm,599,1,1,1,1,1,1 133 | https://github.com/valeoai/ConfidNet,61,1,1,1,1,0,1 134 | https://github.com/myzwisc/PPRL_NeurIPS19,4,1,1,1,0,0,0 135 | https://github.com/yihanjiang/turboae,18,1,1,1,1,0,0 136 | https://github.com/boschresearch/PA-GAN,0,1,0,0,0,0,0 137 | https://github.com/superrrpotato/Defending-Neural-Backdoors-via-Generative-Distribution-Modeling,7,1,1,1,0,0,0 138 | https://github.com/ykasten/Convergence-Rate-NN-Different-Frequencies,0,1,1,1,0,1,0 139 | https://github.com/wokas36/DFNets,4,1,1,0,0,0,1 140 | https://github.com/AnujMahajanOxf/MAVEN,15,1,1,0,0,0,1 141 | https://github.com/mblondel/projection-losses,11,1,0,0,0,0,0 142 | https://github.com/code-terminator/classwise_rationale,6,1,1,0,0,0,1 143 | https://github.com/optimass/Maximally_Interfered_Retrieval,19,1,1,1,0,0,1 144 | https://github.com/georgehc/mnar_mc,2,1,1,1,0,0,1 145 | https://github.com/BBVA/UMAL,7,1,1,1,0,1,0 146 | https://github.com/jenninglim/model-comparison-test,3,0,1,0,0,0,1 147 | 
https://github.com/patrick-kidger/Deep-Signature-Transforms,31,1,1,1,0,0,1 148 | https://github.com/ZJULearning/RMI,94,1,1,1,0,0,1 149 | https://github.com/shupenggui/ATMC,16,1,1,1,1,1,1 150 | https://github.com/Kipok/understanding-momentum,9,1,1,1,0,1,0 151 | https://github.com/mmkamani7/LUPA-SGD,0,1,1,0,0,0,0 152 | https://github.com/xjtushujun/meta-weight-net,76,1,1,0,0,0,1 153 | https://github.com/LCSL/dpp-vfx,5,1,1,1,0,0,0 154 | https://github.com/AilsaF/cogen_by_ais,4,0,1,0,0,0,0 155 | https://github.com/mengzaiqiao/SCAN,20,1,1,1,0,0,1 156 | https://github.com/guoyongcs/NAT,41,1,1,1,0,0,1 157 | https://github.com/PKU-AI-Edge/FEN,23,1,1,0,0,0,0 158 | https://github.com/yytzsy/SCDM,36,1,1,1,1,0,0 159 | https://github.com/BGU-CS-VIL/dtan,30,1,1,1,0,1,1 160 | https://github.com/hojonathanho/localbitsback,8,1,0,0,1,0,1 161 | https://github.com/NingMiao/KerBS,8,1,1,0,0,0,1 162 | https://github.com/TavorB/Correlated-Sequential-Halving,0,1,1,0,0,0,0 163 | https://github.com/canqin001/PointDAN,42,0,1,1,0,0,1 164 | https://github.com/researchmm/DBTNet,71,0,1,1,1,1,1 165 | https://github.com/IBM/HOTT,27,1,1,1,0,0,0 166 | https://github.com/tinyRattar/CSMRI_0325,10,1,1,1,0,0,1 167 | https://github.com/AIasd/noise_fairlearn,0,1,1,1,0,0,0 168 | https://github.com/batra-mlp-lab/vln-chasing-ghosts,2,0,1,1,0,0,1 169 | https://github.com/artsobolev/IWHVI,3,1,1,1,0,0,0 170 | https://github.com/qiaott/LeicaGAN,9,1,1,1,0,0,0 171 | https://github.com/makrout/Deep-Learning-without-Weight-Transport,10,1,1,1,0,1,1 172 | https://github.com/pfnet-research/recompute,1,1,1,1,0,0,1 173 | https://github.com/Magauiya/Extended_SURE,4,1,1,1,1,0,1 174 | https://github.com/saralajew/cbc_networks,7,1,1,1,0,0,1 175 | https://github.com/SuReLI/rats-experiments,2,1,1,1,0,0,1 176 | https://github.com/sreyas-mohan/DeepFreq,8,1,1,1,1,0,1 177 | https://github.com/megvii-model/DetNAS,190,1,1,1,1,1,1 178 | https://github.com/AllenInstitute/coupledAE,2,1,1,1,0,1,0 179 | https://github.com/raywzy/VSC,6,1,1,1,0,0,1 180 | https://github.com/facebookresearch/minirts,122,1,1,1,1,0,1 181 | https://github.com/HaohanWang/PAR,7,1,0,0,0,0,0 182 | https://github.com/bwang-ml/gapBoost,3,0,1,1,0,0,1 183 | https://github.com/yaircarmon/semisup-adv,27,1,1,1,1,0,1 184 | https://github.com/naver/r2d2,92,0,1,1,1,1,1 185 | https://github.com/BouchardLab/DynamicalComponentsAnalysis,4,1,1,1,0,0,1 186 | https://github.com/schung039/neural_manifolds_replicaMFT,5,1,0,0,0,0,1 187 | https://github.com/Chichilnisky-Lab/shah-neurips-2019,0,1,0,0,0,0,0 188 | https://github.com/abr/neurips2019,115,1,1,1,0,1,1 189 | https://github.com/thuml/Batch-Spectral-Shrinkage,3,1,1,0,0,0,1 190 | https://github.com/fanyun-sun/vGraph,17,1,1,0,0,0,1 191 | https://github.com/smair/archetypalanalysis-coreset,1,1,1,1,0,1,1 192 | https://github.com/IssamLaradji/sls,67,1,1,1,0,1,1 193 | https://github.com/jliang993/A3DMM,2,0,1,1,0,1,1 194 | https://github.com/tk1980/GaussianPooling,8,1,1,1,0,1,1 195 | https://github.com/ermongroup/MetaIRL,22,0,1,0,1,0,0 196 | https://github.com/ansuini/IntrinsicDimDeep,11,1,1,1,0,1,1 197 | https://github.com/idiap/fullgrad-saliency,54,1,1,1,0,0,1 198 | https://github.com/AliaksandrSiarohin/first-order-model,271,1,1,1,1,1,1 199 | https://github.com/mit-han-lab/deep-leakage-from-gradients,59,1,1,0,0,1,1 200 | https://github.com/HazyResearch/hgcn,97,1,1,1,0,1,1 201 | https://github.com/Cyanogenoid/dspn,44,1,1,1,1,0,1 202 | https://github.com/bogus2000/zero-shot_SGAL,0,1,1,0,0,0,0 203 | https://github.com/delegability/data,1,0,1,0,0,0,0 204 | 
https://github.com/vlkniaz/MAGritte,4,1,1,1,0,0,1 205 | https://github.com/umutsimsekli/sgd_first_exit_time,0,1,0,0,0,0,0 206 | https://github.com/Liusifei/UVC,100,1,1,1,1,1,1 207 | https://github.com/NikosParotsidis/Fully-dynamic_facility_location-NeurIPS2019,0,1,0,0,0,0,0 208 | https://github.com/CausalML/IntrinsicallyEfficientStableOPE,4,1,0,1,0,0,0 209 | https://github.com/nyummvc/Arbicon-Net,1,0,0,0,0,0,0 210 | https://github.com/anoneurips2019/SGD-learns-functions-of-increasing-complexity,1,1,1,1,0,0,0 211 | https://github.com/mskyt88/info-relax-sampling,0,1,0,0,0,0,0 212 | https://github.com/NVlabs/Dance2Music,234,0,0,0,0,0,0 213 | https://github.com/dragnet-org/dragnet,837,1,1,1,0,0,1 214 | https://github.com/xbpeng/mcp,3,0,0,0,0,0,0 215 | https://github.com/cvlab-yonsei/projects,52,0,0,0,0,0,0 216 | https://github.com/danmcduff/characterizingBias,1,1,1,1,0,0,1 217 | https://github.com/briancheung/superposition,15,1,0,1,0,0,1 218 | https://github.com/ermongroup/mintnet,22,1,1,1,0,0,1 219 | https://github.com/microsoft/IBAC-SNI,14,1,1,1,0,0,1 220 | https://github.com/xkianteb/ApproPO,0,1,0,0,0,0,1 221 | https://github.com/ciarapb/recovering_bandits,0,1,0,1,0,0,0 222 | https://github.com/dingyiming0427/goalgail,11,1,1,1,0,0,0 223 | -------------------------------------------------------------------------------- /templates/README.md: -------------------------------------------------------------------------------- 1 | >📋 A template README.md for code accompanying a Machine Learning paper 2 | 3 | # My Paper Title 4 | 5 | This repository is the official implementation of [My Paper Title](https://arxiv.org/abs/2030.12345). 6 | 7 | >📋 Optional: include a graphic explaining your approach/main result, a BibTeX entry, and links to demos, blog posts, and tutorials 8 | 9 | ## Requirements 10 | 11 | To install requirements: 12 | 13 | ```setup 14 | pip install -r requirements.txt 15 | ``` 16 | 17 | >📋 Describe how to set up the environment, e.g. pip/conda/docker commands, how to download datasets, etc. 18 | 19 | ## Training 20 | 21 | To train the model(s) in the paper, run this command: 22 | 23 | ```train 24 | python train.py --input-data <path_to_data> --alpha 10 --beta 20 25 | ``` 26 | 27 | >📋 Describe how to train the models, with example commands for each model in the paper, including the full training procedure and appropriate hyperparameters. 28 | 29 | ## Evaluation 30 | 31 | To evaluate my model on ImageNet, run: 32 | 33 | ```eval 34 | python eval.py --model-file mymodel.pth --benchmark imagenet 35 | ``` 36 | 37 | >📋 Describe how to evaluate the trained models on the benchmarks reported in the paper; give commands that produce the results in the section below. 38 | 39 | ## Pre-trained Models 40 | 41 | You can download pretrained models here: 42 | 43 | - [My awesome model](https://drive.google.com/mymodel.pth) trained on ImageNet using parameters x, y, z. 44 | 45 | >📋 Give a link to where/how the pretrained models can be downloaded and how they were trained (if applicable). Alternatively you can have an additional column in your results table with a link to the models.
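>📋 Optionally, show users how to load a released checkpoint. The sketch below is illustrative only: it assumes a PyTorch checkpoint such as `mymodel.pth` above and a hypothetical model class `MyModel` defined in this repository.

```python
import torch

from my_model import MyModel  # hypothetical module/class shipped in this repository

# Instantiate the architecture, then restore the downloaded weights
model = MyModel()
model.load_state_dict(torch.load("mymodel.pth", map_location="cpu"))
model.eval()  # switch off dropout/batch-norm updates before evaluating
```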
46 | 47 | ## Results 48 | 49 | Our model achieves the following performance: 50 | 51 | ### [Image Classification on ImageNet](https://paperswithcode.com/sota/image-classification-on-imagenet) 52 | 53 | | Model name | Top 1 Accuracy | Top 5 Accuracy | 54 | | ------------------ | ---------------- | -------------- | 55 | | My awesome model | 85% | 95% | 56 | 57 | >📋 Include a table of results from your paper, and link back to the leaderboard for clarity and context. If your main result is a figure, include that figure and link to the command or notebook to reproduce it. 58 | 59 | 60 | ## Contributing 61 | 62 | >📋 Pick a license and describe how to contribute to your code repository. 63 | --------------------------------------------------------------------------------