# Best Paper Awards in Top Conferences of Artificial Intelligence

To help students and researchers quickly find and study recent top papers in the field of artificial intelligence, we collected the award-winning papers from top conferences and created this repository.

We also maintain [CV_Paper_Portal](https://hongsong-wang.github.io/CV_Paper_Portal/), [AI_arXiv_Portal](https://hongsong-wang.github.io/AI_arXiv_Portal/) and [CS_arXiv_Paper](https://hongsong-wang.github.io/CS_arXiv_Paper). Please also star these repositories [https://github.com/hongsong-wang/BestPaperAwards_AI](https://github.com/hongsong-wang/BestPaperAwards_AI), [https://github.com/hongsong-wang/CV_Paper_Portal](https://github.com/hongsong-wang/CV_Paper_Portal), [https://github.com/hongsong-wang/AI_arXiv_Portal](https://github.com/hongsong-wang/AI_arXiv_Portal) and [https://github.com/hongsong-wang/CS_arXiv_Paper](https://github.com/hongsong-wang/CS_arXiv_Paper) if they help you!
## Table of Contents
- [CVPR Best Papers](#conference-on-computer-vision-and-pattern-recognition-cvpr)
- [ICCV Best Papers](#international-conference-on-computer-vision-iccv)
- [ECCV Best Papers](#european-conference-on-computer-vision-eccv)
- [NeurIPS Best Papers](#conference-on-neural-information-processing-systems-neurips)
- [ICML Best Papers](#international-conference-on-machine-learning-icml)
- [ICLR Best Papers](#international-conference-on-learning-representations-iclr)
- [AAAI Best Papers](#aaai-conference-on-artificial-intelligence-aaai)
- [ACMMM Best Papers](#acm-multimedia)
- [SIGGRAPH Best Papers](#siggraph)

## Conference on Computer Vision and Pattern Recognition (CVPR)
[2025](https://cvpr.thecvf.com/Conferences/2025/BestPapersDemos), [2024](https://cvpr.thecvf.com/Conferences/2024/News/Awards), [2023](https://cvpr.thecvf.com/Conferences/2023/BestPaperAwards), [2022](https://cvpr2022.thecvf.com/cvpr-2022-paper-awards), [2021](https://cvpr2021.thecvf.com/node/329), [2020](https://cvpr2020.thecvf.com/node/817)

### Best Paper
(2025) [VGGT: Visual Geometry Grounded Transformer](https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_VGGT_Visual_Geometry_Grounded_Transformer_CVPR_2025_paper.pdf), [Code](https://github.com/facebookresearch/vggt), [Supplementary](https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_VGGT_Visual_Geometry_CVPR_2025_supplemental.pdf) **Authors**: Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, David Novotny. **Affiliations**: University of Oxford, Meta AI

(2024) [Generative Image Dynamics](https://generative-dynamics.github.io/static/pdfs/GenerativeImageDynamics.pdf), [Code](https://generative-dynamics.github.io/), [Supplementary](https://generative-dynamics.github.io/static/pdfs/supp.pdf) **Authors**: Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski. **Affiliations**: Google Research

(2024) [Rich Human Feedback for Text-to-Image Generation](https://arxiv.org/pdf/2312.10240), [Code](https://github.com/google-research/google-research/tree/master/richhf_18k), [Supplementary](https://arxiv.org/abs/2312.10240) **Authors**: Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, Junjie Ke, Krishnamurthy Dj Dvijotham, Katie Collins, Yiwen Luo, Yang Li, Kai J Kohlhoff, Deepak Ramachandran, Vidhya Navalpakkam. **Affiliations**: University of California, San Diego, Google Research, University of Southern California, University of Cambridge, Brandeis University

(2023) [Planning-oriented Autonomous Driving](https://openaccess.thecvf.com/content/CVPR2023/papers/Hu_Planning-Oriented_Autonomous_Driving_CVPR_2023_paper.pdf), [Code](https://github.com/OpenDriveLab/UniAD), [Supplementary](https://arxiv.org/abs/2212.10156) **Authors**: Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li. **Affiliations**: Shanghai AI Laboratory, Wuhan University, SenseTime Research

(2023) [Visual Programming: Compositional visual reasoning without training](https://openaccess.thecvf.com/content/CVPR2023/papers/Gupta_Visual_Programming_Compositional_Visual_Reasoning_Without_Training_CVPR_2023_paper.pdf), [Code](https://github.com/allenai/visprog), [Supplementary](https://arxiv.org/pdf/2211.11559) **Authors**: Tanmay Gupta, Aniruddha Kembhavi.
**Affiliations**: Allen Institute for AI

(2022) [Learning to Solve Hard Minimal Problems](https://openaccess.thecvf.com/content/CVPR2022/papers/Hruby_Learning_To_Solve_Hard_Minimal_Problems_CVPR_2022_paper.pdf), [Code](https://github.com/petrhruby97/learning_minimal), [Supplementary](https://arxiv.org/pdf/2112.03424) **Authors**: Petr Hruby, Timothy Duff, Anton Leykin, and Tomas Pajdla. **Affiliations**: ETH Zurich, University of Washington, Georgia Institute of Technology, Czech Technical University in Prague

(2021) [GIRAFFE: Representing Scenes As Compositional Generative Neural Feature Fields](https://openaccess.thecvf.com/content/CVPR2021/papers/Niemeyer_GIRAFFE_Representing_Scenes_As_Compositional_Generative_Neural_Feature_Fields_CVPR_2021_paper.pdf), [Code](https://github.com/autonomousvision/giraffe), [Supplementary](http://arxiv.org/abs/2011.12100) **Authors**: Michael Niemeyer, Andreas Geiger. **Affiliations**: Max Planck Institute for Intelligent Systems, Tübingen, University of Tübingen

(2020) [Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wu_Unsupervised_Learning_of_Probably_Symmetric_Deformable_3D_Objects_From_Images_CVPR_2020_paper.pdf), [Code](https://github.com/elliottwu/unsup3d), [Supplementary](https://arxiv.org/pdf/1911.11130) **Authors**: Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi. **Affiliations**: University of Oxford


### Best Student Paper
(2025) [Neural Inverse Rendering from Propagating Light](https://openaccess.thecvf.com/content/CVPR2025/papers/Malik_Neural_Inverse_Rendering_from_Propagating_Light_CVPR_2025_paper.pdf) **Authors**: Anagh Malik, Benjamin Attal, Andrew Xie, Matthew O'Toole, David B. Lindell. **Affiliations**: University of Toronto, Vector Institute, Carnegie Mellon University

(2024) [Mip-Splatting: Alias-free 3D Gaussian Splatting](https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_Mip-Splatting_Alias-free_3D_Gaussian_Splatting_CVPR_2024_paper.pdf), [Code](https://niujinshuchong.github.io/mip-splatting), [Supplementary](https://arxiv.org/pdf/2311.16493) **Authors**: Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger. **Affiliations**: University of Tübingen, Tübingen AI Center, ShanghaiTech University, Czech Technical University in Prague

(2024) [BIOCLIP: A Vision Foundation Model for the Tree of Life](https://openaccess.thecvf.com/content/CVPR2024/papers/Stevens_BioCLIP_A_Vision_Foundation_Model_for_the_Tree_of_Life_CVPR_2024_paper.pdf), [Code](https://imageomics.github.io/bioclip/), [Supplementary](https://arxiv.org/pdf/2311.18803) **Authors**: Samuel Stevens, Jiaman Wu, Matthew J Thompson, Elizabeth G Campolongo, Chan Hee Song, David Edward Carlyn, Li Dong, Wasila M Dahdul, Charles Stewart, Tanya Berger-Wolf, Wei-Lun Chao, Yu Su. **Affiliations**: The Ohio State University, Microsoft Research, University of California, Irvine, Rensselaer Polytechnic Institute

(2023) [3D Registration with Maximal Cliques](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_3D_Registration_with_Maximal_Cliques_CVPR_2023_paper.pdf), [Code](https://github.com/zhangxy0517/3D-Registration-with-Maximal-Cliques), [Supplementary](https://arxiv.org/pdf/2305.10854) **Authors**: Xiyu Zhang, Jiaqi Yang, Shikun Zhang, Yanning Zhang.
**Affiliations**: Northwestern Polytechnical University

(2022) [EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation](https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_EPro-PnP_Generalized_End-to-End_Probabilistic_Perspective-N-Points_for_Monocular_Object_Pose_Estimation_CVPR_2022_paper.pdf), [Code](https://github.com/tjiiv-cprg/EPro-PnP), [Supplementary](https://arxiv.org/pdf/2203.13254) **Authors**: Hansheng Chen, Pichao Wang, Fan Wang, Wei Tian, Lu Xiong, Hao Li. **Affiliations**: Tongji University, Alibaba Group

(2021) [Task Programming: Learning Data Efficient Behavior Representations](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Task_Programming_Learning_Data_Efficient_Behavior_Representations_CVPR_2021_paper.pdf), [Code](https://sites.google.com/view/task-programming), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Sun_Task_Programming_Learning_CVPR_2021_supplemental.pdf) **Authors**: Jennifer J. Sun, Ann Kennedy, Eric Zhan, David J. Anderson, Yisong Yue, Pietro Perona. **Affiliations**: Caltech, Northwestern University

(2020) [BSP-Net: Generating Compact Meshes via Binary Space Partitioning](https://arxiv.org/pdf/1911.06971), [Code](https://github.com/czq142857/BSP-NET-original), [Supplementary](https://arxiv.org/pdf/1911.06971) **Authors**: Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang.
**Affiliations**: Simon Fraser University

### Best Paper Honorable Mention
(2025) [MegaSaM: Accurate, Fast and Robust Structure and Motion from Casual Dynamic Videos](https://openaccess.thecvf.com/content/CVPR2025/papers/Li_MegaSaM_Accurate_Fast_and_Robust_Structure_and_Motion_from_Casual_CVPR_2025_paper.pdf), [Code](https://github.com/mega-sam/mega-sam), [Supplementary](https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_MegaSaM_Accurate_Fast_CVPR_2025_supplemental.pdf) **Authors**: Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, Noah Snavely. **Affiliations**: Google DeepMind, UC Berkeley, University of Michigan

(2025) [Navigation World Models](https://openaccess.thecvf.com/content/CVPR2025/papers/Bar_Navigation_World_Models_CVPR_2025_paper.pdf), [Code](https://github.com/facebookresearch/nwm), [Supplementary](https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bar_Navigation_World_Models_CVPR_2025_supplemental.pdf) **Authors**: Amir Bar, Gaoyue Zhou, Danny Tran, Trevor Darrell, Yann LeCun.
**Affiliations**: FAIR at Meta, New York University, Berkeley AI Research

(2025) [Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models](https://openaccess.thecvf.com/content/CVPR2025/papers/Deitke_Molmo_and_PixMo_Open_Weights_and_Open_Data_for_State-of-the-Art_CVPR_2025_paper.pdf), [Code](), [Supplementary](https://openaccess.thecvf.com/content/CVPR2025/supplemental/Deitke_Molmo_and_PixMo_CVPR_2025_supplemental.pdf) **Authors**: Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, Jiasen Lu, Taira Anderson, Erin Bransom, Kiana Ehsani, Huong Ngo, YenSung Chen, Ajay Patel, Mark Yatskar, Chris Callison-Burch, Andrew Head, Rose Hendrix, Favyen Bastani, Eli VanderBilt, Nathan Lambert, Yvonne Chou, Arnavi Chheda, Jenna Sparks, Sam Skjonsberg, Michael Schmitz, Aaron Sarnat, Byron Bischoff, Pete Walsh, Chris Newell, Piper Wolters, Tanmay Gupta, Kuo-Hao Zeng, Jon Borchardt, Dirk Groeneveld, Crystal Nam, Sophie Lebrecht, Caitlin Wittlif, Carissa Schoenick, Oscar Michel, Ranjay Krishna, Luca Weihs, Noah A. Smith, Hannaneh Hajishirzi, Ross Girshick, Ali Farhadi, Aniruddha Kembhavi. **Affiliations**: Allen Institute for AI, University of Washington, University of Pennsylvania

(2025) [3D Student Splatting and Scooping](https://openaccess.thecvf.com/content/CVPR2025/papers/Zhu_3D_Student_Splatting_and_Scooping_CVPR_2025_paper.pdf), [Code](https://github.com/realcrane/3D-student-splatting-and-scooping), [Supplementary](https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhu_3D_Student_Splatting_CVPR_2025_supplemental.pdf) **Authors**: Jialin Zhu, Jiangbei Yue, Feixiang He, He Wang.
**Affiliations**: University College London, UK, University of Leeds, UK, AI Centre, University College London, UK

(2024) [EventPS: Real-Time Photometric Stereo Using an Event Camera](https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_EventPS_Real-Time_Photometric_Stereo_Using_an_Event_Camera_CVPR_2024_paper.pdf), [Code](https://codeberg.org/ybh1998/EventPS) **Authors**: Bohan Yu, Jieji Ren, Jin Han, Feishi Wang, Jinxiu Liang, Boxin Shi. **Affiliations**: Peking University, Shanghai Jiao Tong University, The University of Tokyo, National Institute of Informatics

(2024) [pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction](https://openaccess.thecvf.com/content/CVPR2024/papers/Charatan_pixelSplat_3D_Gaussian_Splats_from_Image_Pairs_for_Scalable_Generalizable_CVPR_2024_paper.pdf), [Code](https://dcharatan.github.io/pixelsplat), [Supplementary](https://arxiv.org/pdf/2312.12337) **Authors**: David Charatan, Sizhe Lester Li, Andrea Tagliasacchi, Vincent Sitzmann. **Affiliations**: Massachusetts Institute of Technology, University of Toronto

(2023) [DynIBaR: Neural Dynamic Image-Based Rendering](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_DynIBaR_Neural_Dynamic_Image-Based_Rendering_CVPR_2023_paper.pdf), [Code](https://github.com/google/dynibar), [Supplementary](https://arxiv.org/pdf/2211.11082) **Authors**: Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely.
**Affiliations**: Google Research, Cornell Tech

(2022) [Dual-Shutter Optical Vibration Sensing](https://openaccess.thecvf.com/content/CVPR2022/papers/Sheinin_Dual-Shutter_Optical_Vibration_Sensing_CVPR_2022_paper.pdf), [Code](https://imaging.cs.cmu.edu/vibration/), [Supplementary](https://imaging.cs.cmu.edu/vibration/) **Authors**: Mark Sheinin, Dorian Chan, Matthew O’Toole, and Srinivasa G. Narasimhan. **Affiliations**: Carnegie Mellon University

(2021) [Exploring Simple Siamese Representation Learning](https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Exploring_Simple_Siamese_Representation_Learning_CVPR_2021_paper.pdf), [Code](https://github.com/facebookresearch/simsiam), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Exploring_Simple_Siamese_CVPR_2021_supplemental.pdf) **Authors**: Xinlei Chen, Kaiming He. **Affiliations**: Facebook AI Research

(2021) [Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos](https://openaccess.thecvf.com/content/CVPR2021/papers/Jafarian_Learning_High_Fidelity_Depths_of_Dressed_Humans_by_Watching_Social_CVPR_2021_paper.pdf), [Code](https://github.com/yasaminjafarian/HDNet_TikTok), [Supplementary](https://arxiv.org/abs/2103.03319) **Authors**: Yasamin Jafarian, Hyun Soo Park. **Affiliations**: University of Minnesota

(2021) [Less Is More: ClipBERT for Video-and-Language Learning via Sparse Sampling](https://openaccess.thecvf.com/content/CVPR2021/papers/Lei_Less_Is_More_ClipBERT_for_Video-and-Language_Learning_via_Sparse_Sampling_CVPR_2021_paper.pdf), [Code](https://github.com/jayleicn/ClipBERT), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lei_Less_Is_More_CVPR_2021_supplemental.pdf) **Authors**: Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu.
**Affiliations**: UNC Chapel Hill, Microsoft Dynamics 365 AI Research

(2021) [Binary TTC: A Temporal Geofence for Autonomous Navigation](https://openaccess.thecvf.com/content/CVPR2021/papers/Badki_Binary_TTC_A_Temporal_Geofence_for_Autonomous_Navigation_CVPR_2021_paper.pdf), [Code](https://github.com/NVlabs/BiTTC), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Badki_Binary_TTC_A_CVPR_2021_supplemental.pdf) **Authors**: Abhishek Badki, Orazio Gallo, Jan Kautz, Pradeep Sen. **Affiliations**: NVIDIA, UC Santa Barbara

(2021) [Real-Time High-Resolution Background Matting](https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_Real-Time_High-Resolution_Background_Matting_CVPR_2021_paper.pdf), [Code](https://github.com/PeterL1n/BackgroundMattingV2), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lin_Real-Time_High-Resolution_Background_CVPR_2021_supplemental.pdf) **Authors**: Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L. Curless, Steven M. Seitz, Ira Kemelmacher-Shlizerman. **Affiliations**: University of Washington

(2020) [DeepCap: Monocular Human Performance Capture Using Weak Supervision](https://openaccess.thecvf.com/content_CVPR_2020/papers/Habermann_DeepCap_Monocular_Human_Performance_Capture_Using_Weak_Supervision_CVPR_2020_paper.pdf), [Supplementary](https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Habermann_DeepCap_Monocular_Human_CVPR_2020_supplemental.pdf) **Authors**: Marc Habermann, Weipeng Xu, Michael Zollhofer, Gerard Pons-Moll, Christian Theobalt.
**Affiliations**: Max Planck Institute for Informatics, Saarland Informatics Campus, Stanford University

### Best Student Paper Honorable Mention
(2025) [Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens](https://openaccess.thecvf.com/content/CVPR2025/papers/Pan_Generative_Multimodal_Pretraining_with_Discrete_Diffusion_Timestep_Tokens_CVPR_2025_paper.pdf), [Code](https://github.com/selftok-team/SelftokTokenizer/), [Supplementary](https://openaccess.thecvf.com/content/CVPR2025/supplemental/Pan_Generative_Multimodal_Pretraining_CVPR_2025_supplemental.pdf) **Authors**: Kaihang Pan, Wang Lin, Zhongqi Yue, Tenglong Ao, Liyu Jia, Wei Zhao, Juncheng Li, Siliang Tang, Hanwang Zhang. **Affiliations**: Zhejiang University, Nanyang Technological University, Peking University, Huawei Singapore Research Center

(2024) [SpiderMatch: 3D Shape Matching with Global Optimality and Geometric Consistency](https://openaccess.thecvf.com/content/CVPR2024/papers/Roetzer_SpiderMatch_3D_Shape_Matching_with_Global_Optimality_and_Geometric_Consistency_CVPR_2024_paper.pdf), [Code](https://github.com/paul0noah/spider-match) **Authors**: Paul Roetzer, Florian Bernard. **Affiliations**: University of Bonn

(2024) [Image Processing GNN: Breaking Rigidity in Super-Resolution](https://openaccess.thecvf.com/content/CVPR2024/papers/Tian_Image_Processing_GNN_Breaking_Rigidity_in_Super-Resolution_CVPR_2024_paper.pdf), [Code](https://github.com/huawei-noah/Efficient-Computing/tree/master/LowLevel/IPG) **Authors**: Yuchuan Tian, Hanting Chen, Chao Xu, Yunhe Wang.
**Affiliations**: National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Huawei Noah’s Ark Lab

(2024) [Objects as volumes: A stochastic geometry view of opaque solids](https://openaccess.thecvf.com/content/CVPR2024/papers/Miller_Objects_as_Volumes_A_Stochastic_Geometry_View_of_Opaque_Solids_CVPR_2024_paper.pdf), [Supplementary](https://arxiv.org/pdf/2312.15406) **Authors**: Bailey Miller, Hanyu Chen, Alice Lai, Ioannis Gkioulekas. **Affiliations**: Carnegie Mellon University

(2024) [Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods](https://openaccess.thecvf.com/content/CVPR2024/papers/Jiang_Comparing_the_Decision-Making_Mechanisms_by_Transformers_and_CNNs_via_Explanation_CVPR_2024_paper.pdf), [Code](https://mingqij.github.io/projects/cdmmtc/), [Supplementary](https://arxiv.org/pdf/2212.06872) **Authors**: Mingqi Jiang, Saeed Khorram, Li Fuxin. **Affiliations**: Oregon State University

(2023) [DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation](https://openaccess.thecvf.com/content/CVPR2023/papers/Ruiz_DreamBooth_Fine_Tuning_Text-to-Image_Diffusion_Models_for_Subject-Driven_Generation_CVPR_2023_paper.pdf), [Code](https://dreambooth.github.io/), [Supplementary](https://arxiv.org/pdf/2208.12242) **Authors**: Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman.
**Affiliations**: Google Research, Boston University

(2022) [Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields](https://openaccess.thecvf.com/content/CVPR2022/papers/Verbin_Ref-NeRF_Structured_View-Dependent_Appearance_for_Neural_Radiance_Fields_CVPR_2022_paper.pdf), [Code](https://github.com/minfenli/refnerf-pl), [Supplementary](https://arxiv.org/pdf/2112.03907) **Authors**: Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan Barron, Pratul Srinivasan. **Affiliations**: Harvard University, Google Research

(2021) [Less Is More: ClipBERT for Video-and-Language Learning via Sparse Sampling](https://openaccess.thecvf.com/content/CVPR2021/papers/Lei_Less_Is_More_ClipBERT_for_Video-and-Language_Learning_via_Sparse_Sampling_CVPR_2021_paper.pdf), [Code](https://github.com/jayleicn/ClipBERT), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lei_Less_Is_More_CVPR_2021_supplemental.pdf) **Authors**: Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu. **Affiliations**: UNC Chapel Hill, Microsoft Dynamics 365 AI Research

(2021) [Binary TTC: A Temporal Geofence for Autonomous Navigation](https://openaccess.thecvf.com/content/CVPR2021/papers/Badki_Binary_TTC_A_Temporal_Geofence_for_Autonomous_Navigation_CVPR_2021_paper.pdf), [Code](https://github.com/NVlabs/BiTTC), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Badki_Binary_TTC_A_CVPR_2021_supplemental.pdf) **Authors**: Abhishek Badki, Orazio Gallo, Jan Kautz, Pradeep Sen.
**Affiliations**: NVIDIA, UC Santa Barbara

(2021) [Real-Time High-Resolution Background Matting](https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_Real-Time_High-Resolution_Background_Matting_CVPR_2021_paper.pdf), [Code](https://github.com/PeterL1n/BackgroundMattingV2), [Supplementary](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lin_Real-Time_High-Resolution_Background_CVPR_2021_supplemental.pdf) **Authors**: Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L. Curless, Steven M. Seitz, Ira Kemelmacher-Shlizerman. **Affiliations**: University of Washington

(2020) [DeepCap: Monocular Human Performance Capture Using Weak Supervision](https://arxiv.org/pdf/2003.08325) **Authors**: Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt. **Affiliations**: Max Planck Institute for Informatics, Saarland Informatics Campus, Stanford University

## International Conference on Computer Vision (ICCV)

### Best Paper (Marr Prize)
(2023) [Passive Ultra-Wideband Single-Photon Imaging](https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_Passive_Ultra-Wideband_Single-Photon_Imaging_ICCV_2023_paper.pdf), [Code](), [Supplementary](https://openaccess.thecvf.com/content/ICCV2023/supplemental/Wei_Passive_Ultra-Wideband_Single-Photon_ICCV_2023_supplemental.zip) **Authors**: Mian Wei, Sotiris Nousias, Rahul Gulve, David B. Lindell, Kiriakos N. Kutulakos.
**Affiliations**: University of Toronto

(2021) [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://openaccess.thecvf.com/content/ICCV2021/papers/Liu_Swin_Transformer_Hierarchical_Vision_Transformer_Using_Shifted_Windows_ICCV_2021_paper.pdf), [Code](https://github.com/microsoft/Swin-Transformer), [Supplementary](https://openaccess.thecvf.com/content/ICCV2021/supplemental/Liu_Swin_Transformer_Hierarchical_ICCV_2021_supplemental.pdf) **Authors**: Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. **Affiliations**: Microsoft Research Asia, University of Science and Technology of China, Xi'an Jiaotong University, Tsinghua University

(2019) [SinGAN: Learning a Generative Model from a Single Natural Image](https://openaccess.thecvf.com/content_ICCV_2019/papers/Shaham_SinGAN_Learning_a_Generative_Model_From_a_Single_Natural_Image_ICCV_2019_paper.pdf), [Code](https://github.com/tamarott/SinGAN), [Supplementary](https://openaccess.thecvf.com/content_ICCV_2019/supplemental/Shaham_SinGAN_Learning_a_ICCV_2019_supplemental.pdf) **Authors**: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli. **Affiliations**: Technion, Google Research

(2017) [Mask R-CNN](https://openaccess.thecvf.com/content_ICCV_2017/papers/He_Mask_R-CNN_ICCV_2017_paper.pdf), [Code](https://github.com/facebookresearch/maskrcnn-benchmark), [Supplementary](https://arxiv.org/abs/1703.06870) **Authors**: Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick.
**Affiliations**: Facebook AI Research (FAIR)

### Best Student Paper
(2023) [Tracking Everything Everywhere All at Once](https://openaccess.thecvf.com/content/ICCV2023/papers/Wang_Tracking_Everything_Everywhere_All_at_Once_ICCV_2023_paper.pdf), [Code](https://github.com/qianqianwang68/omnimotion), [Supplementary](https://openaccess.thecvf.com/content/ICCV2023/supplemental/Wang_Tracking_Everything_Everywhere_ICCV_2023_supplemental.pdf) **Authors**: Qianqian Wang, Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Aleksander Holynski, Noah Snavely. **Affiliations**: Cornell University, Google Research, UC Berkeley

(2021) [Pixel-Perfect Structure-from-Motion with Featuremetric Refinement](https://openaccess.thecvf.com/content/ICCV2021/papers/Lindenberger_Pixel-Perfect_Structure-From-Motion_With_Featuremetric_Refinement_ICCV_2021_paper.pdf), [Code](https://github.com/cvg/pixel-perfect-sfm), [Supplementary](https://openaccess.thecvf.com/content/ICCV2021/supplemental/Lindenberger_Pixel-Perfect_Structure-From-Motion_With_ICCV_2021_supplemental.pdf) **Authors**: Philipp Lindenberger, Paul-Edouard Sarlin, Viktor Larsson, Marc Pollefeys. **Affiliations**: ETH Zurich, Microsoft

### Best Paper Honorable Mention Award
(2023) [Segment Anything](https://openaccess.thecvf.com/content/ICCV2023/papers/Kirillov_Segment_Anything_ICCV_2023_paper.pdf), [Code](https://github.com/facebookresearch/segment-anything), [Supplementary](https://openaccess.thecvf.com/content/ICCV2023/supplemental/Kirillov_Segment_Anything_ICCV_2023_supplemental.pdf) **Authors**: Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
**Affiliations**: Meta AI Research, FAIR

(2021) [Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields](https://openaccess.thecvf.com/content/ICCV2021/papers/Barron_Mip-NeRF_A_Multiscale_Representation_for_Anti-Aliasing_Neural_Radiance_Fields_ICCV_2021_paper.pdf), [Code](https://github.com/google/mipnerf), [Supplementary](https://openaccess.thecvf.com/content/ICCV2021/supplemental/Barron_Mip-NeRF_A_Multiscale_ICCV_2021_supplemental.pdf) **Authors**: Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan. **Affiliations**: Google, UC Berkeley

(2021) [OpenGAN: Open-Set Recognition via Open Data Generation](https://openaccess.thecvf.com/content/ICCV2021/papers/Kong_OpenGAN_Open-Set_Recognition_via_Open_Data_Generation_ICCV_2021_paper.pdf), [Code](https://github.com/aimerykong/OpenGAN), [Supplementary](https://openaccess.thecvf.com/content/ICCV2021/supplemental/Kong_OpenGAN_Open-Set_Recognition_ICCV_2021_supplemental.pdf) **Authors**: Shu Kong, Deva Ramanan. **Affiliations**: Carnegie Mellon University, Argo AI

(2021) [Viewing Graph Solvability via Cycle Consistency](https://openaccess.thecvf.com/content/ICCV2021/papers/Arrigoni_Viewing_Graph_Solvability_via_Cycle_Consistency_ICCV_2021_paper.pdf), [Code](https://github.com/federica-arrigoni/solvability), [Supplementary](https://openaccess.thecvf.com/content/ICCV2021/supplemental/Arrigoni_Viewing_Graph_Solvability_ICCV_2021_supplemental.pdf) **Authors**: Federica Arrigoni, Andrea Fusiello, Elisa Ricci, Tomas Pajdla.
**Affiliations**: University of Trento, University of Udine, Fondazione Bruno Kessler, CIIRC CTU in Prague

(2021) [Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction](https://openaccess.thecvf.com/content/ICCV2021/papers/Reizenstein_Common_Objects_in_3D_Large-Scale_Learning_and_Evaluation_of_Real-Life_ICCV_2021_paper.pdf), [Code](https://github.com/facebookresearch/co3d), [Supplementary](https://openaccess.thecvf.com/content/ICCV2021/supplemental/Reizenstein_Common_Objects_in_ICCV_2021_supplemental.pdf) **Authors**: Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, David Novotny. **Affiliations**: Facebook AI Research, University College London

## European Conference on Computer Vision (ECCV)
[2024](https://eccv.ecva.net/virtual/2024/awards_detail), [2022](https://eccv2022.ecva.net/files/2022/10/ECCV22-Awards.pdf), [2020](https://eccv2020.eu/awards/)

### Best Paper

(2024) [Minimalist Vision with Freeform Pixels](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/08113.pdf), [Code](https://github.com/ColumbiaComputerVision/mincam), [Supplementary](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/08113-supp.pdf), **Authors**: Jeremy Klotz, Shree Nayar.
**Affiliations**: Columbia University, New York NY, USA

(2022) [On the Versatile Uses of Partial Distance Correlation in Deep Learning](), **Authors**: Xingjian Zhen, Zihang Meng, Rudrasis Chakraborty, Vikas Singh

(2020) [RAFT: Recurrent All-Pairs Field Transforms for Optical Flow](), **Authors**: Zachary Teed, Jia Deng

### Best Paper Honorable Mention

(2024) [Rasterized Edge Gradients: Handling Discontinuities Differentially](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/11059.pdf), [Code](https://github.com/facebookresearch/DRTK), [Supplementary](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/11059-supp.pdf), **Authors**: Stanislav Pidhorskyi, Tomas Simon, Gabriel Schwartz, He Wen, Yaser Sheikh, Jason Saragih. **Affiliations**: Reality Labs, Meta, Pittsburgh, Pennsylvania, USA

(2024) [Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/12206.pdf), [Supplementary](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/12206-supp.pdf), **Authors**: Vitali Petsiuk, Kate Saenko. **Affiliations**: Boston University, Boston, USA

(2022) [Pose-NDF: Modelling Human Pose Manifolds with Neural Distance Fields](), **Authors**: Garvita Tiwari, Dimitrije Antic, Jan E. Lenssen, Nikolaos Sarafianos, Tony Tung, Gerard Pons-Moll

(2022) [A Level Set Theory for Neural Implicit Evolution under Explicit Flows](), **Authors**: Ishit Mehta, Manmohan Chandraker, Ravi Ramamoorthi

(2020) [Towards Streaming Perception](), **Authors**: Mengtian Li, Yu-Xiong Wang, Deva Ramanan

(2020) [NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis](), **Authors**: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T.
Barron, Ravi Ramamoorthi, Ren Ng 159 | 160 | ### Award Candidate 161 | 162 | (2024) [Sapiens: Foundation for Human Vision Models](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/00529.pdf), [Code](https://github.com/facebookresearch/sapiens), [Supplementary](https://arxiv.org/abs/2408.12569), **Authors**: Rawal Khirodkar, Timur Bagautdinov, Julieta Martinez, Zhaoen Su, Austin T James, Peter Selednik, Stuart Anderson, Shunsuke Saito. **Affiliations**: Codec Avatars Lab, Meta 163 | 164 | (2024) [Integer-Valued Training and Spike-driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/04704.pdf), [Code](https://github.com/BICLab/SpikeYOLO), [Supplementary](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/04704-supp.pdf), **Authors**: Xinhao Luo, Man Yao, Yuhong Chou, Bo Xu, Guoqi Li. **Affiliations**: Institute of Automation, Chinese Academy of Sciences, Xi’an Jiaotong University 165 | 166 | (2024) [LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/01383.pdf), [Code](https://github.com/BolinLai/LEGO), [Supplementary](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/01383-supp.pdf), **Authors**: Bolin Lai, Xiaoliang Dai, Lawrence Chen, Guan Pang, James M Rehg, Miao Liu.
**Affiliations**: GenAI, Meta, Georgia Institute of Technology, University of Illinois Urbana-Champaign 167 | 168 | ## Conference on Neural Information Processing Systems (NeurIPS) 169 | [2024](https://neurips.cc/virtual/2024/awards_detail), [2023](https://neurips.cc/virtual/2023/awards_detail), [2022](https://neurips.cc/virtual/2022/awards_detail), [2021](https://neurips.cc/virtual/2021/awards_detail), [2020](https://neurips.cc/virtual/2020/awards_detail) 170 | 171 | ### Best Paper 172 | (2024) [Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction](https://proceedings.neurips.cc/paper_files/paper/2024/file/9a24e284b187f662681440ba15c416fb-Paper-Conference.pdf), [Code](https://github.com/FoundationVision/VAR), **Authors**: Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, Liwei Wang **Affiliations**: Peking University, Bytedance Inc 173 | 174 | (2024) [Stochastic Taylor Derivative Estimator: Efficient amortization for arbitrary differential operators](https://proceedings.neurips.cc/paper_files/paper/2024/file/dd2eb5250696753ea37141bbd89bb569-Paper-Conference.pdf), [Code](https://github.com/sail-sg/stde), **Authors**: Zekun Shi, Zheyuan Hu, Min Lin, Kenji Kawaguchi **Affiliations**: National University of Singapore, Sea AI Lab 175 | 176 | ### Outstanding Paper 177 | 178 | (2023) [Privacy Auditing with One (1) Training Run](https://proceedings.neurips.cc/paper_files/paper/2023/file/9a6f6e0d6781d1cb8689192408946d73-Paper-Conference.pdf), **Authors**: Thomas Steinke, Milad Nasr, Matthew Jagielski **Affiliations**: Google DeepMind 179 | 180 | (2023) [Are Emergent Abilities of Large Language Models a Mirage?](https://proceedings.neurips.cc/paper_files/paper/2023/file/adc98a266f45005c403b8311ca7e8bd7-Paper-Conference.pdf), **Authors**: Rylan Schaeffer, Brando Miranda, Sanmi Koyejo **Affiliations**: Stanford University 181 | 182 | (2023) [DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT 
Models](https://openreview.net/pdf?id=kaHpo8OZw2), **Authors**: Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li 183 | 184 | (2023) [ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation](https://proceedings.neurips.cc/paper_files/paper/2023/file/45fbcc01349292f5e059a0b8b02c8c3f-Paper-Datasets_and_Benchmarks.pdf), **Authors**: Sungduk Yu, Walter Hannah, Liran Peng, Jerry Lin, Mohamed Aziz Bhouri, Ritwik Gupta, Björn Lütjens, Justus C. Will, Gunnar Behrens, Julius Busecke, Nora Loose, Charles Stern, Tom Beucler, Bryce Harrop, Benjamin Hillman, Andrea Jenney, Savannah L. Ferretti, Nana Liu, Animashree Anandkumar, Noah Brenowitz, Veronika Eyring, Nicholas Geneva, Pierre Gentine, Stephan Mandt, Jaideep Pathak, Akshay Subramaniam, Carl Vondrick, Rose Yu, Laure Zanna, Tian Zheng, Ryan Abernathey, Fiaz Ahmed, David Bader, Pierre Baldi, Elizabeth Barnes, Christopher Bretherton, Peter Caldwell, Wayne Chuang, Yilun Han, YU HUANG, Fernando Iglesias-Suarez, Sanket Jantre, Karthik Kashinath, Marat Khairoutdinov, Thorsten Kurth, Nicholas Lutsko, Po-Lun Ma, Griffin Mooers, J. 
David Neelin, David Randall, Sara Shamekh, Mark Taylor, Nathan Urban, Janni Yuval, Guang Zhang, Mike Pritchard 185 | 186 | ### Best Paper Runner-up 187 | 188 | (2024) [Not All Tokens Are What You Need for Pretraining](https://proceedings.neurips.cc/paper_files/paper/2024/file/3322a9a72a1707de14badd5e552ff466-Paper-Conference.pdf), [Code](https://github.com/microsoft/rho), **Authors**: Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, Weizhu Chen **Affiliations**: Xiamen University, Tsinghua University, Shanghai AI Laboratory, Microsoft 189 | 190 | (2024) [Guiding a Diffusion Model with a Bad Version of Itself](https://proceedings.neurips.cc/paper_files/paper/2024/file/5ee7ed60a7e8169012224dec5fe0d27f-Paper-Conference.pdf), **Authors**: Tero Karras, Miika Aittala, Tuomas Kynkäänniemi, Jaakko Lehtinen, Timo Aila, Samuli Laine **Affiliations**: NVIDIA, Aalto University 191 | 192 | ### Outstanding Paper Runner-up 193 | 194 | (2023) [Scaling Data-Constrained Language Models](https://proceedings.neurips.cc/paper_files/paper/2023/file/9d89448b63ce1e2e8dc7af72c984c196-Paper-Conference.pdf), **Authors**: Niklas Muennighoff, Alexander Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksandra Piktus, Sampo Pyysalo, Thomas Wolf, Colin A Raffel **Affiliations**: Hugging Face, Harvard University, University of Turku 195 | 196 | (2023) [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://proceedings.neurips.cc/paper_files/paper/2023/file/a85b405ed65c6477a4fe8302b5e06ce7-Paper-Conference.pdf), **Authors**: Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, Chelsea Finn **Affiliations**: Stanford University, CZ Biohub 197 | 198 | ## International Conference on Machine Learning (ICML) 199 | 200 | [2025](https://icml.cc/virtual/2025/awards_detail), [2024](https://icml.cc/virtual/2024/awards_detail), [2023](https://icml.cc/virtual/2023/awards_detail),
[2022](https://icml.cc/virtual/2022/awards_detail), [2021](https://icml.cc/virtual/2021/awards_detail), [2020](https://icml.cc/virtual/2020/awards_detail) 201 | 202 | ### Outstanding Paper 203 | 204 | ### Outstanding Position Paper 205 | 206 | ## AAAI Conference on Artificial Intelligence (AAAI) 207 | [1994–Present](https://aaai.org/about-aaai/aaai-awards/aaai-conference-paper-awards-and-recognition/) 208 | 209 | ## International Conference on Learning Representations (ICLR) 210 | [2025](https://media.iclr.cc/Conferences/ICLR2025/ICLR2025_Outstanding_Paper_Awards.pdf), [2024](https://blog.iclr.cc/2024/05/06/iclr-2024-outstanding-paper-awards/), [2023](https://iclr.cc/virtual/2023/awards_detail), [2022](https://iclr.cc/virtual/2022/awards_detail), [2021](https://iclr.cc/media/Press/ICLR_2021_Fact_Sheet.pdf) 211 | 212 | ## ACM Multimedia 213 | [2024](https://2024.acmmm.org/awards) 214 | 215 | ## SIGGRAPH 216 | [2025](https://blog.siggraph.org/2025/06/siggraph-2025-technical-papers-awards-best-papers-honorable-mentions-and-test-of-time.html/), [2024](https://blog.siggraph.org/2024/06/siggraph-2024-technical-papers-awards-best-papers-honorable-mentions-and-test-of-time.html/), [2023](https://blog.siggraph.org/2023/07/siggraph-2023-technical-papers-awards-best-papers-honorable-mentions-and-test-of-time.html/) 217 | 218 | ## SIGGRAPH Asia 219 | [2024](https://asia.siggraph.org/2024/for-the-press/press-releases/siggraph-asia-2024-award-winners/index.html) 220 | 221 | ## References 222 | [Best Paper Awards in Computer Science](https://jeffhuang.com/best_paper_awards/) 223 | 224 | [CVPR Best Paper Award](https://tc.computer.org/tcpami/2022/08/22/cvpr-best-paper-award/) 225 | 226 | [ICCV Paper Awards](https://tc.computer.org/tcpami/awards/iccv-paper-awards/) 227 | 228 | ## Acknowledgements 229 | 230 | If this document is helpful to you, please give it a Star ⭐ at the top right of the page. Thank you! 231 | 232 | If you repost the content of this document, please credit the source: https://github.com/hongsong-wang/BestPaperAwards_AI 233 | 234 |
235 | 236 | This webpage is protected by copyright law. No individual or organization may use its content in any form without the owner's written permission. Content may be reprinted for non-commercial purposes such as learning, research, or personal sharing only if the source is clearly indicated as "Content sourced from [https://github.com/hongsong-wang/BestPaperAwards_AI]", the content is kept intact, and the original text is not altered or distorted. The owner reserves the right to pursue legal liability for any unauthorized use of this webpage's content. If you find these papers useful, please cite them. 237 | 238 |
239 | --------------------------------------------------------------------------------