# Awesome-Imitation-Learning [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
A curated list of awesome imitation learning (including inverse reinforcement learning and behavior cloning) resources, inspired by [awesome-php](https://github.com/ziadoz/awesome-php).

## Contribution
Please feel free to send a [pull request](https://github.com/kristery/Awesome-Imitation-Learning/pulls) or an email (kriswu8021@gmail.com) to add links.

## Table of Contents

- [Papers](#papers)
- [Tutorials and Talks](#tutorials-and-talks)
- [Blogs](#blogs)

## Papers

### General settings
* [How Resilient Are Imitation Learning Methods to Sub-optimal Experts?](https://link.springer.com/chapter/10.1007/978-3-031-21689-3_32), N. Gavenski et al., BRACIS 2023

* [IQ-Learn: Inverse soft-Q Learning for Imitation](https://arxiv.org/abs/2106.12142), D. Garg et al., NeurIPS 2021

* [Learning from Imperfect Demonstrations from Agents with Varying Dynamics](https://arxiv.org/abs/2103.05910), Z. Cao et al., ICRA 2021

* [Robust Imitation Learning from Noisy Demonstrations](https://proceedings.mlr.press/v130/tangkaratt21a.html), V. Tangkaratt et al., AISTATS 2021

* [Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate](https://arxiv.org/abs/2003.03709), Y. Zhang et al., ICML 2020

* [Provable Representation Learning for Imitation Learning via Bi-level Optimization](https://arxiv.org/pdf/2002.10544), S. Arora et al., ICML 2020

* [Domain Adaptive Imitation Learning](https://arxiv.org/pdf/1910.00105), K. Kim et al., ICML 2020

* [VILD: Variational Imitation Learning with Diverse-quality Demonstrations](https://proceedings.mlr.press/v119/tangkaratt20a.html), V. Tangkaratt et al., ICML 2020

* [Imitation Learning from Imperfect Demonstration](http://proceedings.mlr.press/v97/wu19a/wu19a.pdf), Y. Wu et al., ICML 2019

* [A Divergence Minimization Perspective on Imitation Learning Methods](https://arxiv.org/abs/1911.02256), S. Ghasemipour et al., CoRL 2019

* [Sample-Efficient Imitation Learning via Generative Adversarial Nets](https://arxiv.org/abs/1809.02064), L. Blonde et al., AISTATS 2019

* [Sample Efficient Imitation Learning for Continuous Control](https://openreview.net/pdf?id=BkN5UoAqF7), F. Sasaki et al., ICLR 2019

* [Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation](http://proceedings.mlr.press/v97/wang19d.html), R. Wang et al., ICML 2019

* [Uncertainty-Aware Data Aggregation for Deep Imitation Learning](https://arxiv.org/abs/1905.02780), Y. Cui et al., ICRA 2019

* [Goal-conditioned Imitation Learning](https://openreview.net/pdf?id=HkglHcSj2N), Y. Ding et al., ICML Workshop 2019

* [Adversarial Imitation Learning from Incomplete Demonstrations](https://arxiv.org/pdf/1905.12310.pdf), M. Sun et al., 2019

* [Generative Adversarial Self-Imitation Learning](https://openreview.net/forum?id=HJeABnCqKQ), J. Oh et al., 2019

* [Wasserstein Adversarial Imitation Learning](https://arxiv.org/pdf/1906.08113.pdf), H. Xiao et al., 2019
* [Learning Plannable Representations with Causal InfoGAN](http://papers.nips.cc/paper/8090-learning-plannable-representations-with-causal-infogan.pdf), T. Kurutach et al., NeurIPS 2018

* [Self-Imitation Learning](https://arxiv.org/abs/1806.05635), J. Oh et al., ICML 2018

* [Deep Q-learning from Demonstrations](https://arxiv.org/abs/1704.03732), T. Hester et al., AAAI 2018

* [An Algorithmic Perspective on Imitation Learning](https://www.nowpublishers.com/article/Details/ROB-053), T. Osa et al., 2018

* [Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning](https://arxiv.org/pdf/1809.02925.pdf), I. Kostrikov et al., 2018

* [Universal Planning Networks](https://arxiv.org/pdf/1804.00645.pdf), A. Srinivas et al., 2018

* [Learning to Search via Retrospective Imitation](https://authors.library.caltech.edu/92668/1/1804.00846.pdf), J. Song et al., 2018

* [Third-Person Imitation Learning](https://arxiv.org/abs/1703.01703), B. Stadie et al., ICLR 2017

* [RAIL: Risk-Averse Imitation Learning](https://arxiv.org/abs/1707.06658), A. Santara et al., NIPS 2017

* [Generative Adversarial Imitation Learning](https://arxiv.org/abs/1606.03476), J. Ho et al., NIPS 2016

### Applications

* [Model Imitation for Model-Based Reinforcement Learning](https://arxiv.org/abs/1909.11821), Y. Wu et al., 2019

* [Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations](https://arxiv.org/pdf/1907.03976.pdf), D. Brown et al., CoRL 2019

* [Task-Relevant Adversarial Imitation Learning](https://arxiv.org/abs/1910.01077), K. Zolna et al., 2019

* [Multi-Task Hierarchical Imitation Learning for Home Automation](http://ronberenstein.com/papers/CASE19_Multi-Task%20Hierarchical%20Imitation%20Learning%20for%20Home%20Automation%20%20.pdf), R. Fox et al., 2019

* [Imitation Learning for Human Pose Prediction](https://arxiv.org/pdf/1909.03449.pdf), B. Wang et al., 2019

* [Making Efficient Use of Demonstrations to Solve Hard Exploration Problems](https://arxiv.org/abs/1909.01387), C. Gulcehre et al., 2019

* [Imitation Learning from Video by Leveraging Proprioception](https://arxiv.org/pdf/1905.09335.pdf), F. Torabi et al., IJCAI 2019

* [Adversarial Imitation Learning from Incomplete Demonstrations](https://arxiv.org/abs/1905.12310), M. Sun et al., 2019

* [End-to-end Driving via Conditional Imitation Learning](https://arxiv.org/abs/1710.02410), F. Codevilla et al., ICRA 2018

* [R2P2: A ReparameteRized Pushforward Policy for Diverse, Precise Generative Path Forecasting](https://link.springer.com/chapter/10.1007/978-3-030-01261-8_47), N. Rhinehart et al., ECCV 2018 [[blog]](http://www.cs.cmu.edu/~nrhineha/R2P2.html)

* [End-to-End Learning Driver Policy using Moments Deep Neural Network](https://ieeexplore.ieee.org/abstract/document/8664869), D. Qian et al., ROBIO 2018

* [Learning Montezuma’s Revenge from a Single Demonstration](https://arxiv.org/pdf/1812.03381.pdf), T. Salimans et al., 2018

* [ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst](https://arxiv.org/pdf/1812.03079.pdf), M. Bansal et al., 2018
* [Video Imitation GAN: Learning control policies by imitating raw videos using generative adversarial reward estimation](https://arxiv.org/pdf/1810.01108.pdf), S. Chaudhury et al., 2018

* [Query-Efficient Imitation Learning for End-to-End Autonomous Driving](https://arxiv.org/abs/1605.06450), J. Zhang et al., 2016

### Survey papers
* [Imitation Learning: Progress, Taxonomies and Challenges](https://ieeexplore.ieee.org/document/9927439), B. Zheng et al., 2022

* [Deep Reinforcement Learning: An Overview](https://arxiv.org/abs/1701.07274), Y. Li, 2018

* [A Brief Survey of Deep Reinforcement Learning](https://arxiv.org/abs/1708.05866), K. Arulkumaran et al., 2017

* [Imitation Learning: A Survey of Learning Methods](http://www.open-access.bcu.ac.uk/5045/1/Imitation%20Learning%20A%20Survey%20of%20Learning%20Methods.pdf), A. Hussein et al., 2017

### Robotics and Vision
* [Graph-Structured Visual Imitation](https://arxiv.org/abs/1907.05518), M. Sieb et al., CoRL 2019

* [On-Policy Robot Imitation Learning from a Converging Supervisor](https://arxiv.org/abs/1907.03423), A. Balakrishna et al., CoRL 2019

* [Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards](https://pdfs.semanticscholar.org/8186/04245973bb30ad021728149a89157b3b2780.pdf), M. Vecerik et al., 2017

### Cold-start methods
* [Zero-Shot Visual Imitation](http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w40/Pathak_Zero-Shot_Visual_Imitation_CVPR_2018_paper.pdf), D. Pathak et al., ICLR 2018

* [One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks](https://arxiv.org/pdf/1810.11043.pdf), T. Yu et al., 2018

* [One-Shot Imitation Learning](http://papers.nips.cc/paper/6709-one-shot-imitation-learning), Y. Duan et al., NIPS 2017

### Learning multi-modal behaviors
* [Learning a Multi-Modal Policy via Imitating Demonstrations with Mixed Behaviors](https://arxiv.org/pdf/1903.10304), F. Hsiao et al., 2019

* [Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards](https://arxiv.org/pdf/1906.03352), A. Zhou et al., 2019

* [Shared Multi-Task Imitation Learning for Indoor Self-Navigation](https://arxiv.org/pdf/1808.04503.pdf), J. Xu et al., 2018

* [Robust Imitation of Diverse Behaviors](http://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors), Z. Wang et al., NIPS 2017

* [Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets](http://papers.nips.cc/paper/6723-multi-modal-imitation-learning-from-unstructured-demonstrations-using-generative-adversarial-nets), K. Hausman et al., NIPS 2017

* [InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations](http://papers.nips.cc/paper/6971-infogail-interpretable-imitation-learning-from-visual-demonstrations), Y. Li et al., NIPS 2017

### Hierarchical approaches
* [Learning Compound Tasks without Task-specific Knowledge via Imitation and Self-supervised Learning](https://proceedings.icml.cc/static/paper_files/icml/2020/4283-Paper.pdf), S. Lee et al., ICML 2020

* [CompILE: Compositional Imitation Learning and Execution](https://arxiv.org/pdf/1812.01483.pdf), T. Kipf et al., ICML 2019
* [Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information](https://openreview.net/pdf?id=BJeWUs05KQ), M. Sharma et al., ICLR 2019

* [Hierarchical Imitation and Reinforcement Learning](https://arxiv.org/abs/1803.00590), H. Le et al., ICML 2018

* [OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning](https://arxiv.org/pdf/1709.06683.pdf), P. Henderson et al., AAAI 2018

### Learning from human preference
* [Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences](https://arxiv.org/abs/2002.09089), D. Brown et al., ICML 2020

* [A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning Agents](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16195/15869), Y. Wu et al., AAAI 2018

* [Deep Reinforcement Learning from Human Preferences](http://papers.nips.cc/paper/7017-deep-reinforcement-learning-from-human-preferences), P. Christiano et al., NIPS 2017

### Learning from observations
* [Self-Supervised Adversarial Imitation Learning](https://arxiv.org/abs/2304.10914), J. Monteiro et al., IJCNN 2023

* [MobILE: Model-Based Imitation Learning From Observation Alone](https://proceedings.neurips.cc/paper_files/paper/2021/hash/f06048518ff8de2035363e00710c6a1d-Abstract.html), R. Kidambi et al., NeurIPS 2021

* [Off-Policy Imitation Learning from Observations](https://arxiv.org/abs/2102.13185), Z. Zhu et al., NeurIPS 2020

* [Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement](https://arxiv.org/abs/1910.04417), C. Yang et al., NeurIPS 2019

* [To Follow or not to Follow: Selective Imitation Learning from Observations](https://arxiv.org/abs/1912.07670), Y. Lee et al., CoRL 2019

* [Provably Efficient Imitation Learning from Observation Alone](http://proceedings.mlr.press/v97/sun19b.html), W. Sun et al., ICML 2019

* [Recent Advances in Imitation Learning from Observation](https://arxiv.org/pdf/1905.13566.pdf), F. Torabi et al., IJCAI 2019

* [Adversarial Imitation Learning from State-only Demonstrations](https://dl.acm.org/citation.cfm?id=3332067), F. Torabi et al., AAMAS 2019

* [Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation](https://arxiv.org/abs/1707.03374), Y. Liu et al., 2018

* [Observational Learning by Reinforcement Learning](https://arxiv.org/abs/1706.06617), D. Borsa et al., 2017

### Model-based approaches
* [Safe end-to-end imitation learning for model predictive control](https://arxiv.org/pdf/1803.10231.pdf), K. Lee et al., ICRA 2019

* [Deep Imitative Models for Flexible Inference, Planning, and Control](https://arxiv.org/abs/1810.06544), N. Rhinehart et al., 2019 [[blog]](https://sites.google.com/view/imitative-models)

* [Model-based imitation learning from state trajectories](https://openreview.net/forum?id=S1GDXzb0b&noteId=S1GDXzb0b), S. Chaudhury et al., 2018

* [End-to-End Differentiable Adversarial Imitation Learning](http://proceedings.mlr.press/v70/baram17a/baram17a.pdf), N. Baram et al., ICML 2017
### Behavior cloning
* [Imitating Unknown Policies via Exploration](https://arxiv.org/abs/2008.05660), N. Gavenski et al., BMVC 2020

* [Augmented Behavioral Cloning from Observation](https://arxiv.org/pdf/2004.13529.pdf), J. Monteiro et al., IJCNN 2020

* [Truly Batch Apprenticeship Learning with Deep Successor Features](https://arxiv.org/pdf/1903.10077), D. Lee et al., 2019

* [SQIL: Imitation Learning via Regularized Behavioral Cloning](https://arxiv.org/pdf/1905.11108), S. Reddy et al., 2019

* [Behavioral Cloning from Observation](https://arxiv.org/abs/1805.01954), F. Torabi et al., IJCAI 2018

* [Causal Confusion in Imitation Learning](https://people.eecs.berkeley.edu/~dineshjayaraman/projects/causal_confusion_nips18.pdf), P. de Haan et al., NeurIPS 2018

### Imitation with rewards
* [Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning](https://arxiv.org/abs/1910.11956), A. Gupta et al., CoRL 2019

* [Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Generative Model](https://arxiv.org/pdf/1907.02140.pdf), A. Kinose et al., 2019

* [Reinforced Imitation in Heterogeneous Action Space](https://arxiv.org/pdf/1904.03438), K. Zolna et al., 2019

* [Reinforcement and Imitation Learning for Diverse Visuomotor Skills](https://arxiv.org/abs/1802.09564), Y. Zhu et al., RSS 2018

* [Policy Optimization with Demonstrations](http://proceedings.mlr.press/v80/kang18a.html), B. Kang et al., ICML 2018

* [Reinforcement Learning from Imperfect Demonstrations](https://arxiv.org/pdf/1802.05313.pdf), Y. Gao et al., ICML Workshop 2018

* [Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning](https://arxiv.org/pdf/1812.08904), G. Cruz Jr et al., 2018

* [Sparse Reward Based Manipulator Motion Planning by Using High Speed Learning from Demonstrations](https://ieeexplore.ieee.org/abstract/document/8665328), G. Zuo et al., ROBIO 2018

### Multi-agent systems
* [Independent Generative Adversarial Self-Imitation Learning in Cooperative Multiagent Systems](https://dl.acm.org/citation.cfm?id=3331837), X. Hao et al., AAMAS 2019

* [PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings](https://arxiv.org/abs/1905.01296), N. Rhinehart et al., 2019 [[blog]](https://sites.google.com/view/precog)

### Inverse reinforcement learning
* [Intrinsic Reward Driven Imitation Learning via Generative Model](https://arxiv.org/pdf/2006.15061), X. Yu et al., ICML 2020

* [Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning](https://proceedings.mlr.press/v100/park20a.html), D. Park et al., CoRL 2019

* [Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations](http://proceedings.mlr.press/v97/brown19a/brown19a.pdf), D. Brown et al., ICML 2019

* [Learning Reward Functions by Integrating Human Demonstrations and Preferences](https://arxiv.org/pdf/1906.08928), M. Palan et al., 2019

* [Learning Robust Rewards with Adversarial Inverse Reinforcement Learning](https://arxiv.org/abs/1710.11248), J. Fu et al., 2018
* [Model-Free Deep Inverse Reinforcement Learning by Logistic Regression](https://link.springer.com/article/10.1007/s11063-017-9702-7), E. Uchibe, 2018

* [Compatible Reward Inverse Reinforcement Learning](https://papers.nips.cc/paper/6800-compatible-reward-inverse-reinforcement-learning), A. Metelli et al., NIPS 2017

* [A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models](https://arxiv.org/pdf/1611.03852.pdf), C. Finn et al., NIPS Workshop 2016

* [Maximum Entropy Inverse Reinforcement Learning](https://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf), B. Ziebart et al., AAAI 2008

### POMDP
* [Learning Belief Representations for Imitation Learning in POMDPs](https://arxiv.org/pdf/1906.09510.pdf), T. Gangwani et al., 2019

### Planning
* [Dyna-AIL: Adversarial Imitation Learning by Planning](https://arxiv.org/abs/1903.03234), V. Saxena et al., 2019

### Representation learning
* [Visual Adversarial Imitation Learning using Variational Models](https://proceedings.neurips.cc/paper/2021/hash/1796a48fa1968edd5c5d10d42c7b1813-Abstract.html), R. Rafailov et al., NeurIPS 2021

* [An Empirical Investigation of Representation Learning for Imitation](https://openreview.net/forum?id=kBNhgqXatI), X. Chen et al., NeurIPS 2021

* [Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning](https://ieeexplore.ieee.org/abstract/document/9636363), J. Shang et al., IROS 2021

* [The Surprising Effectiveness of Representation Learning for Visual Imitation](https://arxiv.org/abs/2112.01511), J. Pari et al., 2021

* [Provable Representation Learning for Imitation Learning via Bi-level Optimization](http://proceedings.mlr.press/v119/arora20a.html), S. Arora et al., ICML 2020

* [Causal Confusion in Imitation Learning](https://proceedings.neurips.cc/paper/2019/hash/947018640bf36a2bb609d3557a285329-Abstract.html), P. de Haan et al., NeurIPS 2019

## Tutorials and Talks
* [Imitation Learning Tutorial (ICML 2018)](https://www.youtube.com/watch?v=6rZTaboSY4k) [(Slides)](https://sites.google.com/view/icml2018-imitation-learning/)
* [Imitation learning basics (National Taiwan University)](https://www.youtube.com/watch?v=rOho-2oJFeA)
* [New Frontiers in Imitation Learning (2017)](https://www.youtube.com/watch?v=4PnNlvPGbUQ)
* [Unity Course](https://www.youtube.com/watch?v=uiutRBXfEbg)

## Blogs
* [Introduction to Imitation Learning](https://blog.statsbot.co/introduction-to-imitation-learning-32334c3b1e7a)

### Materials
* [Imitation Learning](http://ciml.info/dl/v0_99/ciml-v0_99-ch18.pdf)
* [CMU Imitation Learning](https://katefvision.github.io/katefSlides/immitation_learning_I_katef.pdf)
* [Deep Reinforcement Learning via Imitation Learning](https://bcourses.berkeley.edu/courses/1453965/files/69855652/download?verifier=5DYZT6niDXA1pc4fTuIndZ1tpIsJeCmcicRgcpY2&wrap=1), S. Levine
## Licenses

[![CC0](http://i.creativecommons.org/p/zero/1.0/88x31.png)](http://creativecommons.org/publicdomain/zero/1.0/)

To the extent possible under law, [Yueh-Hua Wu](https://kristery.github.io/) has waived all copyright and related or neighboring rights to this work.