├── .gitbook.yaml
├── .gitignore
├── CONTRIBUTE.md
├── LICENSE
├── README.md
└── docs
    ├── README.md
    ├── SUMMARY.md
    ├── about-us
    │   ├── join-us.md
    │   └── the-people.md
    └── the-continualai-wiki
        ├── industry.md
        ├── introduction-to-continual-learning.md
        ├── media-articles.md
        ├── research.md
        ├── software-and-benchmarks.md
        └── tutorials-and-courses.md

/.gitbook.yaml:
--------------------------------------------------------------------------------
1 | root: ./docs/
2 |
3 | structure:
4 |   readme: README.md
5 |   summary: SUMMARY.md
6 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | _build*
2 | .vscode/
--------------------------------------------------------------------------------
/CONTRIBUTE.md:
--------------------------------------------------------------------------------
1 | # ContinualAI Wiki: a collaborative wiki on Continual/Lifelong Machine Learning
2 |

3 | ContinualAI logo
4 |

5 |
6 | The aim of the project is to create an open-source, collaborative wiki to provide a starting point for researchers, developers and AI enthusiasts who share an interest in Continual Learning and are willing to learn more or contribute to this field.
7 | You can find info about CL workshops, media articles, companies using CL and much more.
8 |
9 | The wiki also provides a link to a curated list of [annotated papers](https://www.continualai.org/papers/). Be sure to check it out!
10 |
11 | [Join our community](https://continualai.herokuapp.com/) on Slack to stay updated with the latest Continual Learning news.
12 |
13 | Visit the wiki → http://wiki.continualai.org/
14 |
15 | Below, you can find instructions on **how to contribute to the wiki**!
16 |
17 |
18 | ---------------------------------------------------
19 |
20 | ## Add a new paper to the Research section
21 | ContinualAI maintains a [curated list of CL papers](https://wiki.continualai.org/research.html) through a Zotero group. You can join the group and help us keep it updated (see next section).
22 |
23 | If you don't want to join the group, you can simply open a GitHub issue to suggest a new paper (or even more than one). We will take care of adding it to the wiki as soon as possible.
24 |
25 | 1. Open a new GitHub issue. You can use the `new paper` or `new conference` tags to specify which kind of issue you are submitting.
26 |
27 | 2. Attach a bib file containing the paper you want to include in the wiki. If you don't have a bib file, just provide us with a link to the paper. The link should point to a location from which common reference managers can retrieve the paper metadata.
28 |
29 |
30 | ## Join the ContinualAI Zotero group
31 |
32 | You can contribute to the group by **adding new papers** or by helping **annotate the existing ones**.
33 |
34 | 1. 
Join our [Zotero group](https://www.zotero.org/groups/2623909/continual_learning_papers/)
35 |
36 | 2. To **add a new paper**
37 |
38 | 2.1. Add it to the group folder which best represents the paper's contribution. See the advice below if you are uncertain. You can add the paper from your library or directly from the paper's webpage through the Zotero web browser plugin.
39 |
40 | 2.2 Make sure that at least `title`, `authors`, `item type` and `publication` are specified. The `year` must be put inside the `date` field.
41 |
42 | 2.3 Also put a link to the paper in the `url` field.
43 |
44 | 3. To **annotate** an existing paper
45 |
46 | 3.1. Check the list of existing tags in the `tags.csv` file. If you want to add a new tag, please add it there and submit a Pull Request (see the `Contribute to the wiki` section).
47 |
48 | 3.2. Add your tags in the `Tags` tab of Zotero. Please remember to write the tag in square brackets, e.g. `[mytag]`.
49 |
50 | 3.3. Add your notes in the `Notes` tab of Zotero.
51 |
52 | Wiki admins will periodically export the bibtex to keep the list updated. In case we forget, join the [ContinualAI Slack](https://continualai.herokuapp.com/) and complain about our behavior in the `#wiki` channel.
53 |
54 | #### Advice on adding new papers in Zotero
55 |
56 | * Check whether the paper already exists by searching for its `Citation Key` or title in the Zotero search bar.
57 |
58 | * Don't forget to add the publication venue (Journal, Proceedings...). Use `publication = arXiv` if the paper is a preprint.
59 |
60 | * CLAI Wiki uses a system based on categories, which can sometimes be limiting. In general, add the paper to the category you consider most relevant. You can add the paper to at most **2** categories, if you believe that both are equally relevant.
61 |
62 | * Please do not add new tags if a similar category already exists. 
63 |
64 | ----------------------------
65 |
66 | ## Contribute to the wiki - TO BE UPDATED WITH THE NEW WIKI INFO
67 | Adding new papers is not the only way for you to contribute. Adding new companies, workshops or other information is very easy!
68 |
69 |
70 | ## About ContinualAI
71 |
72 | **[ContinualAI](https://continualai.org)** is an open research community on the topic of Continual Learning and AI!
73 | We are a community of CL researchers and enthusiasts! Join us today **[on slack](https://continualai.herokuapp.com)**!
74 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019-2020 ContinualAI
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE. 
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ContinualAI Wiki: a collaborative wiki on Continual/Lifelong Machine Learning
2 |

3 | ContinualAI logo
4 |

5 |
6 | The aim of the project is to create an open-source, collaborative wiki to provide a starting point for researchers, developers and AI enthusiasts who share an interest in Continual Learning and are willing to learn more or contribute to this field.
7 | You can find info about CL workshops, media articles, companies using CL and much more.
8 |
9 | We also provide a curated list of annotated papers in a [separate repository](https://github.com/ContinualAI/continual-learning-papers); be sure to check it out!
10 |
11 | [Join our community](https://continualai.herokuapp.com/) on Slack to stay updated with the latest Continual Learning news.
12 | Visit the wiki → http://wiki.continualai.org/
13 |
14 | Below, you can find instructions on **how to contribute to the wiki**!
15 |
16 | ---------------------------------------------------
17 |
18 | ## Contribute to the wiki
19 |
20 | - Fork the repository
21 | - Modify the MD files in the `docs/` folder
22 | - Submit a Pull Request!
23 | ----------------------------
24 |
25 | ## About ContinualAI
26 |
27 | **[ContinualAI](https://continualai.org)** is an open research community on the topic of Continual Learning and AI!
28 | We are a community of CL researchers and enthusiasts! Join us today **[on slack](https://continualai.herokuapp.com)**!
29 |
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 | # Welcome to ContinualAI Wiki
2 |
3 | Humans have the extraordinary ability to _learn continually_ from experience. 
Not only can we apply previously learned knowledge and skills to new situations, but we can also use these as the foundation for later learning.
4 |
5 | One of the grand goals of AI is building an artificial “Continual Learning” agent that constructs a sophisticated understanding of the world from its own experience through the _autonomous incremental development_ of ever more complex skills and knowledge.
6 |
7 | The aim of **ContinualAI Wiki** is to create an open-source, collaborative wiki to provide a starting point for researchers, developers and AI enthusiasts who share an interest in Continual Learning and are willing to learn more or contribute to this field. Join us now and help us improve it and keep it up to date!
8 |
9 | {% hint style="info" %}
10 | Interested in learning more about CL? Check out our official open-access course [Continual Learning: On Machines that can Learn Continually](https://course.continualai.org)!
11 | {% endhint %}
12 |
--------------------------------------------------------------------------------
/docs/SUMMARY.md:
--------------------------------------------------------------------------------
1 | # Table of contents
2 |
3 | * [Welcome to ContinualAI Wiki](README.md)
4 |
5 | ## Wiki contents
6 |
7 | * [Introduction to Continual Learning](the-continualai-wiki/introduction-to-continual-learning.md)
8 | * [Research](the-continualai-wiki/research.md)
9 | * [Industry](the-continualai-wiki/industry.md)
10 | * [Software and Benchmarks](the-continualai-wiki/software-and-benchmarks.md)
11 | * [Tutorials and Courses](the-continualai-wiki/tutorials-and-courses.md)
12 | * [Media Articles](the-continualai-wiki/media-articles.md)
13 |
14 | ## Continual Learning papers
15 |
16 | * [GitHub List + Bibtex](https://github.com/ContinualAI/continual-learning-papers)
17 |
18 | ## ABOUT US
19 |
20 | * [The People](about-us/the-people.md)
21 | * [Join us!](about-us/join-us.md)
22 | * 
[Slack](https://join.slack.com/t/continualai/shared\_invite/enQtNjQxNDYwMzkxNzk0LTBhYjg2MjM0YTM2OWRkNDYzOGE0ZTIzNDQ0ZGMzNDE3ZGUxNTZmNmM1YzJiYzgwMTkyZDQxYTlkMTI3NzZkNjU)
23 | * [Email](mailto:contact@continualai.org)
24 |
--------------------------------------------------------------------------------
/docs/about-us/join-us.md:
--------------------------------------------------------------------------------
1 | # Join us!
2 |
3 | **ContinualAI** is a non-profit research organization and an open community of Continual/Lifelong Learning researchers and enthusiasts, started in January 2018 by [Vincenzo Lomonaco](http://vincenzolomonaco.com/) and made possible by the early contribution of [Keiland Cooper](http://kwcooper.xyz/).
4 |
5 | We are always looking for new people to join the ContinualAI effort to build open-source projects. Feel free to join us on [Slack](https://join.slack.com/t/continualai/shared_invite/enQtNjQxNDYwMzkxNzk0LTBhYjg2MjM0YTM2OWRkNDYzOGE0ZTIzNDQ0ZGMzNDE3ZGUxNTZmNmM1YzJiYzgwMTkyZDQxYTlkMTI3NzZkNjU) or to [contact us](mailto:contact@continualai.org) to ask for more information.
6 |
7 | ## Contribute to the wiki
8 |
9 | If you appreciated this wiki and want to contribute, visit [GitHub](https://github.com/ContinualAI/wiki) and check out the wiki open-source project! You can help us keep the wiki updated by submitting new issues, suggesting new content and much more. See you there! 
10 |
11 |
--------------------------------------------------------------------------------
/docs/about-us/the-people.md:
--------------------------------------------------------------------------------
1 | # The People
2 |
3 | ## Wiki Contributors
4 |
5 | We are always happy to add new members and get some help improving our wiki, or simply to discuss CL, so join us today on [slack](https://join.slack.com/t/continualai/shared_invite/enQtNjQxNDYwMzkxNzk0LTBhYjg2MjM0YTM2OWRkNDYzOGE0ZTIzNDQ0ZGMzNDE3ZGUxNTZmNmM1YzJiYzgwMTkyZDQxYTlkMTI3NzZkNjU)!
6 |
7 | * [How to Contribute](https://github.com/ContinualAI/wiki#how-to-contribute-to-the-wiki): list of instructions to read before contributing to this project!
8 |
9 | ### Principal Maintainer
10 |
11 | [Andrea Cossu](https://andreacossu.github.io/): _He is a PhD Student in Data Science at Scuola Normale Superiore and University of Pisa, working under the supervision of Davide Bacciu. He is a member of the Pervasive AI Lab at University of Pisa. His research is currently focused on the study of Continual Learning with applications to Recurrent Neural Networks and sequential data processing._
12 |
13 | ### _Other Contributors_
14 |
15 | Here we list the main contributors \(alphabetical order\):
16 |
17 | * Ali Ayub
18 | * Fabio Cermelli
19 | * Matthias De Lange
20 | * Timothée Lesort
21 | * Bogdan Ivanyuk-Skulskiy \([papers graph](https://www.continualai.org/papers/) contribution\)
22 | * Vincenzo Lomonaco
23 | * Michael Milstead
24 | * Martin Mundt
25 | * Simon Ouellette
26 | * German Parisi
27 | * Lorenzo Pellegrini
28 | * Gido Van de Ven
29 |
30 |
--------------------------------------------------------------------------------
/docs/the-continualai-wiki/industry.md:
--------------------------------------------------------------------------------
1 | # Industry
2 |
3 | Nowadays, more and more companies are embracing the power of Machine Learning \(ML\) for different business processes. 
Most of the time, ML models are trained off-line on very big and representative training sets and are then frozen after deployment. However, this makes it impossible for the model to improve and adapt to new circumstances when exposed to new training data.
4 |
5 | Continual Learning enables both scalability and adaptation, two essential factors for many ML and AI systems. On this page you will find everything related to the industry applications of CL.
6 |
7 | ## Current Solutions
8 |
9 | In this section we provide a list of companies that exploit Continual Learning approaches:
10 |
11 | * [Heuritech](https://www.heuritech.com/) is a French startup that applies state-of-the-art deep learning models to fashion. They recognize fine-grained garments and fashion trends on the internet. They regularly have to add novel clothing \(shoes, dresses, etc.\) to their knowledge base; therefore, they are currently doing research in partnership with Sorbonne Université to develop novel Continual Learning methods.
12 | * [Neurala](https://www.neurala.com/) uses deep learning neural network software that makes smart products \(drones\) more autonomous and useful. In particular, Neurala Lifelong Deep Neural Networks \(Lifelong-DNN™\) enable incremental learning of new objects on the fly, without the power of a server located in the cloud. Neurala accomplishes this by [combining different neural network architectures](https://www.neurala.com/press-releases/edge-deep-learning-without-cloud).
13 | * [Continual](https://continual.ai/) is a company providing frameworks and tools to continuously train models on a local infrastructure.
14 | * [Amazon-Comprehend](https://aws.amazon.com/comprehend/) analyzes text and tells you what it finds, starting with the language, from Afrikaans to Yoruba, with 98 more in between. 
It can identify different types of entities \(people, places, brands, products, and so forth\), determine sentiment \(positive, negative, mixed, or neutral\), and extract key phrases, all from text in English or Spanish. Finally, Comprehend’s topic modeling service extracts topics from large sets of documents for analysis or topic-based grouping.
15 | * [IBM-Watson](https://datascience.ibm.com/docs/content/analyze-data/ml-continuous-learning.html) has embraced the philosophy of Continual Learning by providing automated monitoring of model performance, retraining, and redeployment to ensure prediction quality. IBM Watson allows data scientists and analysts to quickly build and prototype models, to monitor deployments, and to learn over time as more data become available.
16 | * [Cogitai](https://www.cogitai.com/) is concerned with building artificial intelligences \(AIs\) that learn continually from interaction with the real world. Commercial applications and solutions are designed to learn knowledge and actions from experience by relying on continual-learning AI approaches.
17 | * [DeepMind](https://deepmind.com/) is one of the world leaders in artificial intelligence research. DeepMind research has recently shown how to develop programs that can learn to solve complex problems without needing to be taught how. In this context, continual learning approaches have been applied to Reinforcement Learning methods.
18 |
19 | ## Future Applications
20 |
21 | Future AI systems will rely on continual learning, as opposed to algorithms that are trained offline. There are many applications and scenarios where continual learning already plays a central role or can be exploited to achieve better results. Here we provide a list of applications where Continual Learning will make the difference:
22 |
23 | * Robotics deals with the use of robots, where learning approaches are generally focused on discrete, e.g. single-task, learning events. 
However, in many applications robots need to be able to react to unexpected events and then update their models/policies to include the newly encountered data points. Nowadays, Reinforcement Learning techniques are starting to provide robots and agents with such capabilities.
24 | * Object Recognition applications aim to recognize different categories of objects in an image. Incremental learning of new categories of objects, without forgetting previous ones, is extremely important for building lifelong autonomous systems.
25 |
26 |
--------------------------------------------------------------------------------
/docs/the-continualai-wiki/introduction-to-continual-learning.md:
--------------------------------------------------------------------------------
1 | # Introduction to Continual Learning
2 |
3 | Here, you will find an _informal_ introduction to Continual Learning. For a comprehensive overview of the field, have a look at our [research section](research.md), in which you can find in-depth surveys on the topic together with specific approaches and techniques to address the Continual Learning challenge.
4 |
5 | ## What is Continual / Lifelong Learning?
6 |
7 | Continual Learning, also known as Lifelong Learning, is built on the idea of learning continuously about the external world in order to enable the autonomous, incremental development of ever more complex skills and knowledge.
8 |
9 | A Continual Learning system can be defined as _an adaptive algorithm capable of learning from a continuous stream of information, with such information becoming progressively available over time and where the number of tasks to be learned \(e.g. membership classes in a classification task\) are not predefined. Critically, the accommodation of new information should occur without catastrophic forgetting or interference._
10 | Parisi et al., _Continual Lifelong Learning with Neural Networks: a review_, 2019. 
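The protocol in this definition can be sketched in a few lines of Python (a hypothetical illustration: `run_continual_protocol`, `train` and `evaluate` are placeholder names introduced here, not an API from this wiki). Tasks arrive one at a time, and after each one the model is evaluated on every task seen so far:

```python
# Hypothetical sketch of the continual learning protocol described above.
# Tasks become available progressively; the model never sees future tasks.
# `train` and `evaluate` are placeholders for any model-specific routines.

def run_continual_protocol(model, task_stream, train, evaluate):
    """Return an accuracy matrix: acc[i][j] is the score on task j
    after training on tasks 0..i (j <= i)."""
    acc, seen = [], []
    for task in task_stream:               # stream: one task at a time
        train(model, task)                 # adapt to the current task only
        seen.append(task)
        acc.append([evaluate(model, t) for t in seen])
    return acc
```

Entries below the diagonal of the resulting matrix track how performance on earlier tasks evolves as training proceeds; drops there are precisely the forgetting discussed later on this page.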
11 |
12 | Hence, in the Continual Learning scenario, a learning model is required to incrementally build and dynamically update internal representations as the distribution of tasks changes across its lifetime. Ideally, part of such internal representations will be general and invariant enough to be reusable across similar tasks, while another part should preserve and encode task-specific representations.
13 |
14 | ## The Deep Learning approach to Learning
15 |
16 | Deep Learning is a subset of Machine Learning in which models - artificial neural networks, in most cases - learn to map input to output by building an adaptive, internal hierarchical representation. Artificial neural networks are made of units linked together by weighted connections. The learning process is defined by changing the value of the weights in order to minimize a cost function which measures how much the output produced by the model differs from the expected outcome.
17 |
18 | Such a learning process is adaptive, meaning that it only requires a \(possibly large\) set of data from which to learn and a suitable cost function to specify the type of task to be performed.
19 |
20 | Decades of research have shown that Deep Learning models are able to accomplish a range of different tasks, often surpassing human-level performance. They are widespread in several fields like language translation, self-driving cars, bio-medical applications, stock prediction in finance… just to name a few!
21 |
22 | The astonishing accomplishments made by Deep Learning are confined to a specific task: without additional training, a Deep Learning neural network which is able to beat the \(human\) world champion at the game of Go will not be able to drive a car or to translate from English to French. However, nothing prevents us from continuing to train the network on new tasks.
23 |
24 | What will be the behavior of the network at the end of the new learning phase? 
This question is at the heart of the Continual Learning field.
25 |
26 | ## The Catastrophic Forgetting phenomenon
27 |
28 | When learning in a Continual Learning environment, the model is exposed to a stream of inputs coming from different distributions, representing different tasks. At each learning step, the model will have to adapt in order to meet the expected behavior.
29 |
30 | A well-known problem in learning multiple tasks sequentially is the _catastrophic forgetting_ phenomenon, which can be concisely summarized in one sentence: _the process of learning new knowledge quickly disrupts previously acquired information_. Catastrophic forgetting \(or simply forgetting\) is the main problem faced by Continual Learning algorithms.
31 |
32 | Unfortunately, _all_ connectionist models are subject to Catastrophic Forgetting. As a consequence, plain neural networks are not suitable for learning in Continual Learning environments, since their performance on previous tasks will degrade very quickly.
33 |
34 | Catastrophic Forgetting can be characterized by looking at the _stability-plasticity_ dilemma: a learning model has to be plastic enough to learn new information, but it also has to be stable enough to preserve internal knowledge. This trade-off is never satisfied for traditional neural networks, where plasticity easily overpowers stability.
35 |
36 | ## Beyond forgetting
37 |
38 | Even if Catastrophic Forgetting is the main focus of Continual Learning, there are other aspects that need to be considered when learning continuously.
39 |
40 | Preserving old knowledge is important not only to perform well on previous tasks. It can also be used to perform better on incoming tasks. This feature, called _transfer learning_, enables Continual Learning algorithms to require only a few examples of a new task to master it. 
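As a concrete toy illustration of the forgetting and the stability-plasticity trade-off described above (a hypothetical sketch, not a method from this wiki), consider a one-parameter model `y = w * x` trained with plain gradient descent on two conflicting tasks in sequence:

```python
# Toy demonstration of catastrophic forgetting: the single weight w is
# first fit to task A, then to a conflicting task B. Mastering task B
# drags w away from the task A solution, so the task A error grows again.

def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on the squared error of y = w * x."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w * x - y) ** 2
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5)]   # task A: w should be 2
task_b = [(x, -1.0 * x) for x in (0.5, 1.0, 1.5)]  # task B: w should be -1

w = train(0.0, task_a)
print("task A error after learning A:", mse(w, task_a))  # near zero

w = train(w, task_b)
print("task A error after learning B:", mse(w, task_a))  # large: A forgotten
```

The same plasticity that lets the model master task B is what destroys its solution for task A; Continual Learning methods aim to break exactly this trade-off.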
41 |
42 | Another interesting opportunity when learning sequentially is the benefit that a previously learned task can receive from subsequent learning. Such _backward transfer_ can positively affect the performance of a Continual Learning algorithm on previous tasks, without seeing any further examples from them. Needless to say, without a method that properly mitigates forgetting, no backward transfer is possible.
43 |
44 | ## Biological perspective
45 |
46 | The main evolutionary advantage of learning is to rapidly change an organism’s behavior to succeed in a dynamic environment. These experience-driven alterations occur on much shorter time scales than genetic evolution can adapt to, allowing a single organism to persist in more situations than one whose behavior is fixed. Because of this, experience-driven alterations are pervasive throughout the animal kingdom, from complex vertebrates to single-celled organisms. The reason for this is simple: learned responses or information acquired from experience improve the chances of an organism’s success, as opposed to a randomly selected behavior.
47 |
48 | While some learning occurs only once, such as imprinting in ducklings, a majority occurs continuously throughout an organism’s lifespan. As the climate, ecological niche, food supply, or other factors change, an organism may alter its response as well. Moreover, this may occur multiple times throughout an organism’s life. For example, a scavenging animal may learn the location of a food supply, returning multiple times to that location. When the source is exhausted, the animal must learn not only to refrain from returning to the location, but also to find a new source. This sequence may happen multiple times throughout an animal’s life, a reality of the scarcity of food. 
49 |
50 | ### Simple Learning
51 |
52 | Throughout the long history of animal learning studies since the late 18th century, a large literature of general rules has been revealed. These universal laws span multiple scales and degrees of complexity, and may be pervasive throughout species or localized to only a few. For example, a quite common form of learning is sensitization and habituation, among the most basic forms. This results in the animal’s increased or reduced response to a given stimulus after repeated exposures. This occurs throughout the animal kingdom, from humans to single cells. For example, if you’re walking in a dark room and someone startles you, your reaction is likely to be more exaggerated than if you were startled in a well-lit room. This is an example of sensitization, as the dark room exaggerates your response. The reciprocal of this can be observed in prairie dogs. Upon hearing the sound of approaching human footsteps, the animals retreat into their holes. As this occurs multiple times, the prairie dogs learn the footsteps are no longer a threat, and no longer retreat once they are heard again. These phenomena can be observed at the single-cell level as well. Differentiated PC12 cells secrete decreasing amounts of norepinephrine as they are repetitively stimulated by concentrations of a potassium ion. These simple learning rules persist throughout an organism’s lifespan, as it experiences different types and degrees of stimuli. Alone, these simple rules can produce an astounding degree of complex behavior, but they are even more impressive when coupled with other mechanisms.
53 |
54 | ### Associative Learning
55 |
56 | Simple modulation of response alone may not be suitable for more complex organisms and environments. A finer degree of acuity may be demanded. Thus, evolution has produced other learning mechanisms designed to parse the causal structure of the environment, as well as to differentiate between individual features and stimuli. 
This type of learning is known as associative, as the animal links together structured information, and it falls into two main classes: classical and instrumental conditioning. Classical conditioning was made famous by Ivan Pavlov and his dogs, and covers an animal’s ability to link novel stimuli with responses, as in the classic example of the ringing bell, a conditioned stimulus, resulting in the dog salivating. Other uses have been exhibited as well. Farmers were killing lions that were preying on their cattle. To deter the cats from the cattle, conservation specialists gave the lions cattle meat which would make them safely sick. This conditioned the lions away from the meat, and the number of cattle killed was drastically reduced. Conditioning of this sort can easily be observed in the wild, and continues throughout the organism’s lifetime, as more and more associations are built.
57 |
58 | When classical conditioning is observed over a longer time scale, complex interactions between the animal’s conditioned responses arise. While many of the rules governing these complex interactions are unknown, some have been uncovered. For example, some stimuli that are experienced but not linked to a response will show a slower learning curve when they are later linked to a response, known as latent inhibition. Prior learning of a stimulus and response pair can also inhibit future stimuli from being learned, known as blocking. Organisms may also exhibit a response to novel stimuli, known as conditioning generalization.
59 |
60 | Organisms may not have these events structured in such a way that the reward is immediately evident, but rather will have to use trial and error until a reward is found. For example, an octopus may try several different actions to open a jar with a crab trapped inside, eventually succeeding by twisting with its arms. When given a new jar, the octopus will open it in fewer attempts, hinting at learning mechanisms. 
This type of learning is known as instrumental conditioning. Organisms often use this type of learning in their environment, attempting to parse out hidden rewards that cannot be known in advance. Many successes in machine learning have leveraged it as well. The famous Q-learning algorithm by Watkins was designed with this type of learning in mind and, paired with deep neural networks, it produced the general Atari-playing algorithm.
61 |
62 | Associative pairs require repeated reinforcement to persist. If an organism learns that an area may be unsafe, but repeatedly sees it as safe afterwards, then the prior pairing will fade. However, if the stimulus reappears, then the organism will learn much more quickly than during the first pairing, hinting that pairings never fully fade.
63 |
64 |
--------------------------------------------------------------------------------
/docs/the-continualai-wiki/media-articles.md:
--------------------------------------------------------------------------------
1 | # Media Articles
2 |
3 | While not yet at its peak of media attention, Continual Learning has repeatedly appeared in multiple sources. On this page we try to cover the most relevant press articles on the subject. 
4 |
5 | * [Quanta Magazine - The Computer Scientist Challenging AI to Learn Better](https://www.quantamagazine.org/the-computer-scientist-trying-to-teach-ai-to-learn-like-we-do-20220802/)
6 | * [Toward continual learning systems](https://gantry.io/blog/toward-continual-learning-systems/)
7 | * [Towards Adaptive AI with Continual Learning](https://ai.kuleuven.be/stories/post/2021-05-10-continual-learning/)
8 | * [Why Neural Networks Forget, and Lessons from the Brain](https://numenta.com/blog/2021/02/04/why-neural-networks-forget-and-lessons-from-the-brain)
9 | * [Machine learning is going real-time](https://huyenchip.com/2020/12/27/real-time-machine-learning.html)
10 | * [Lifelong Learning with Bayesian networks](https://argmax.ai/blog/lll/)
11 | * [Continuum: A Data Loader for Continual Learning](https://medium.com/continual-ai/continuum-a-data-loader-for-continual-learning-bb45ce9ef0ef)
12 | * [Why Continual Learning is the key towards Machine Intelligence](https://medium.com/@vlomonaco/why-continuous-learning-is-the-key-towards-machine-intelligence-1851cb57c308)
13 | * [Why continuous learning is key to AI](https://www.oreilly.com/ideas/why-continuous-learning-is-key-to-ai)
14 | * [DARPA Seeking AI That Learns All the Time](https://spectrum.ieee.org/cars-that-think/robotics/artificial-intelligence/darpa-seeking-ai-that-can-learn-all-the-time)
15 | * [Enabling Continual Learning in Neural Networks](https://deepmind.com/blog/enabling-continual-learning-in-neural-networks/)
16 | * [Neurala Announces Lifelong-DNN™ for Self-Driving Cars, Drones, Toys and Other Machines: Deep Learning That Can Learn on the Device Without Using the Cloud](https://www.neurala.com/press-releases/edge-deep-learning-without-cloud)
17 | * [Lifelong Learning in Facebook CommAI project](https://research.fb.com/downloads/commai/)
18 | * [4 ways to enable Continual learning into Neural Networks](https://hub.packtpub.com/4-ways-enable-continual-learning-neural-networks/)
19 | * [What No One 
Tells You About Real-Time Machine Learning](https://www.kdnuggets.com/2015/11/petrov-real-time-machine-learning.html) 20 | * [Sony wants to push AIs to learn from their own experiences](https://www.engadget.com/2016/05/17/sony-ai-continual-learning/) 21 | * [Cogitai’s Mark Ring – Going Beyond Reinforcement Learning](https://www.techemergence.com/cogitais-mark-ring-going-beyond-reinforcement-learning/) 22 | * [Researchers Selected to Develop Novel Approaches to Lifelong Machine Learning](https://www.darpa.mil/news-events/2018-05-03) 23 | * [Lifelong (machine) learning: how automation can help your models get smarter over time](https://www.ibm.com/blogs/bluemix/2017/10/lifelong-machine-learning-automation-can-help-models-get-smarter-time/) 24 | * [The Next-Generation AI Brain: How AI Is Becoming More Human](https://www.forbes.com/sites/forbestechcouncil/2018/04/09/the-next-generation-ai-brain-how-ai-is-becoming-more-human/2/) 25 | * [AI Edges to Factory Floor (“…incremental learning by 2022”)](https://www.eetimes.com/document.asp?doc\_id=1333973) 26 | * [Variational Continual Learning with Generative Replay](https://towardsdatascience.com/variational-continual-learning-with-generative-replay-bfd43464d250) 27 | * [Guiding Forgetful Machines](https://towardsdatascience.com/guiding-forgetful-machines-72d1b8949138) 28 | * [IBM’s Quest to Solve the Continual Learning Problem and Build Neural Networks Without Amnesia](https://towardsdatascience.com/ibms-quest-to-solve-the-continual-learning-problem-and-build-neural-networks-without-amnesia-7ca70a41d07f) 29 | -------------------------------------------------------------------------------- /docs/the-continualai-wiki/research.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | A useful list of conference workshops and research programs on Continual 4 | Learning 5 | --- 6 | 7 | # Research 8 | 9 | ## Conference Workshops 10 | 11 | * [International Workshop on 
Continual Semi-Supervised Learning \(CSSL\) at IJCAI 2021](https://sites.google.com/view/sscl-workshop-ijcai-2021/) 12 | * [CLVISION workshop \(2nd edition\) at CVPR 2021](https://sites.google.com/view/clvision2021/) 13 | * [WMT20 Lifelong Learning for Machine Translation Shared Task](http://www.statmt.org/wmt20/lifelong-learning-task.html) 14 | * [RO-MAN 2020 Workshop on Lifelong Learning for Long-term Human-Robot Interaction \(LL4LHRI\)](https://sites.google.com/view/ll4lhri2020/objectives-and-challenges) 15 | * [ICML 2020 Workshop on Lifelong Learning](https://lifelongml.github.io/) 16 | * [CVPR 2020 Workshop on Continual Learning in Computer Vision](https://sites.google.com/view/clvision2020) 17 | * [Cosyne 2019 Continual Learning](http://www.cosyne.org/c/index.php?title=Workshops2019_learning) 18 | * [ICML 2019 Workshop on Multi-Task and Lifelong Reinforcement Learning](https://sites.google.com/corp/view/mtlrl/home) 19 | * [ICML 2019 Adaptive and Multitask Learning: Algorithms & Systems](https://www.amtl-workshop.org/) 20 | * [ICML 2018 Lifelong RL workshop](https://sites.google.com/corp/view/llarla2018/home) 21 | * [NeurIPS 2018 Workshop on Meta-Learning](http://metalearning.ml/2018/) 22 | * [NeurIPS 2018 Workshop on Continual Learning](https://sites.google.com/view/continual2018/home) 23 | 24 | ## Research Programs 25 | 26 | * [DARPA Lifelong Learning Machines \(L2M\) program](http://www.darpa.mil/news-events/2017-03-16) 27 | * [Bayes Duality project](https://bayesduality.github.io/) 28 | * [European H2020 DREAM project](http://www.robotsthatdream.eu/) 29 | * [CAREER: Brain-inspired Methods for Continual Learning of Large-scale Vision and Language Tasks](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2047556&HistoricalAwards=false) 30 | * [Shared-Experience Lifelong Learning \(ShELL\)](https://sam.gov/opp/1afbf600f2e04b26941fad352c08d1f1/view) 31 | * [ERC Advanced Grant "KeepOnLearning - Beyond solving static datasets: Deep learning from streaming
data"](https://www.kuleuven.be/english/research/EU/p/horizon2020/es/erc/keeponlearning) 32 | 33 | -------------------------------------------------------------------------------- /docs/the-continualai-wiki/software-and-benchmarks.md: -------------------------------------------------------------------------------- 1 | # Software and Benchmarks 2 | 3 | One of the most important objectives of the ContinualAI project is to provide easy access to Continual Learning, both through didactic materials and through open software and datasets for business and research. On this page we collect open-source projects related to Continual Learning. 4 | 5 | ## Software 6 | 7 | * [Avalanche](https://avalanche.continualai.org/): an End-to-End Library for Continual Learning, developed and maintained by [ContinualAI](https://www.continualai.org/). 8 | * [Sequoia Library](https://github.com/lebrice/Sequoia): A Playground for research at the intersection of Continual, Reinforcement, and Self-Supervised Learning. 9 | * [Continuum](https://github.com/Continvvm/continuum): Continuum is a Python library \(written with PyTorch\) for loading datasets in Continual Learning. It supports many datasets and most CL scenarios \(NC, NI, NIC…\). 10 | * [NORB sequencer](https://github.com/vlomonaco/norb-creator): Java application \(with GUI\) to make small videos out of the NORB dataset. 11 | * [GEM implementation](https://github.com/facebookresearch/GradientEpisodicMemory): Implementation of the CL strategy “Gradient Episodic Memory”. 12 | * [OpenAI Gym](https://gym.openai.com/): Open source interface that provides a ready-to-use suite of reinforcement learning tasks for evaluating the performance of your algorithms. 13 | * [DeepMind Lab](https://github.com/deepmind/lab): 3D learning environment that provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents. 
14 | * [DEN](https://github.com/jaehong-yoon93/DEN): TensorFlow implementation of the CL strategy “Dynamically Expandable Networks”. 15 | 16 | ## Datasets and Benchmarks 17 | 18 | * [CORe50 benchmark](https://github.com/vlomonaco/core50): Continual Learning benchmark for object recognition and robotics. 19 | * [OpenLORIS-Object](https://lifelong-robotic-vision.github.io/dataset/Data_Object-Recognition.html): A Dataset and Benchmark towards Lifelong Object Recognition. 20 | * [Stream-51](https://tyler-hayes.github.io/stream51): Streaming Classification and Novelty Detection from Videos. 21 | * [CRIB](https://iolfcv.github.io/): Synthetic, incremental object learning environment that produces data modeling the visual imagery generated by object exploration in early infancy. 22 | * [Visual Domain Decathlon](https://www.robots.ox.ac.uk/~vgg/decathlon/): Ten image classification problems representative of very different visual domains. 23 | * [iCubWorld Transformations](https://robotology.github.io/iCubWorld/#icubworld-transformations-modal): a Dataset for Continual Learning and Robotics. 24 | * [Omniglot](https://github.com/brendenlake/omniglot): A dataset for few-shot learning, meta-learning and continual learning. 25 | * [NICO](https://www.dropbox.com/sh/8mouawi5guaupyb/AAD4fdySrA6fn3PgSmhKwFgva): Towards Non-i.i.d. Image Classification. 26 | 27 | -------------------------------------------------------------------------------- /docs/the-continualai-wiki/tutorials-and-courses.md: -------------------------------------------------------------------------------- 1 | # Tutorials and Courses 2 | 3 | ## Tutorials 4 | 5 | * [Continual Learning with Neural Networks](https://docs.google.com/presentation/d/1Ukatz11S8sjC40VH293uY91rC3wQLPxiT0R-lOpju7k/edit?usp=sharing) Tutorial @ INNS Big Data and Deep Learning 2019 \[German I. 
Parisi and Vincenzo Lomonaco] 6 | * [Never-Ending Learning](https://sites.google.com/site/neltutorialicml19/) Tutorial @ ICML 2019 \[Tom Mitchell, Partha Talukdar] 7 | * [Lifelong Machine Learning and Computer Reading the Web](http://www.cs.uic.edu/\~liub/Lifelong-Machine-Learning-Tutorial-KDD-2016.pdf) Tutorial @ KDD-2016 \[Zhiyuan Chen, Estevam Hruschka and Bing Liu] 8 | * [Lifelong Machine Learning Tutorial](http://www.cs.uic.edu/\~liub/IJCAI15-tutorial.html) Tutorial @ IJCAI-2015 \[Zhiyuan Chen and Bing Liu] 9 | * [ContinualAI/Colab Notebooks](https://github.com/ContinualAI/colab) Check the [readme](https://github.com/ContinualAI/colab/blob/master/README.md) for a brief guide on how to run the notebooks, or use the links below for a non-interactive version: 10 | * [\[Notebook\] Open-Source Frameworks for Deep Learning: an Overview](https://github.com/ContinualAI/colab/blob/master/notebooks/intro\_to\_dl\_frameworks.ipynb) 11 | * [\[Notebook\] A Gentle Introduction to Continual Learning in PyTorch](https://github.com/ContinualAI/colab/blob/master/notebooks/intro\_to\_continual\_learning.ipynb) 12 | * [\[Notebook\] A simple Example of Continual Learning with Generative Replay](https://github.com/ContinualAI/colab/blob/master/notebooks/intro\_to\_generative\_replay.ipynb) 13 | 14 | ## Courses 15 | 16 | * [Continual Learning: On Machines that can Learn Continually](https://course.continualai.org). University of Pisa, ContinualAI and IADA open-access course, 2021. 17 | * [Continual Learning: Towards Broad AI](https://sites.google.com/view/ift6760-b2021/course-description?authuser=0) Course @ Mila \[Instructor: Irina Rish; Teaching Assistants: Mojtaba Faramarzi and Touraj Laleh] 18 | * [CPE884 - Aprendizado de Máquina Continuado \(Continual Machine Learning\)](http://www.pee.ufrj.br/index.php/en/informacoes-academicas/disciplinas) 19 | --------------------------------------------------------------------------------