Continual learning and Benchmarking continual learning

The paper reviews various continual learning techniques, including parameter isolation, regularization, and replay methods. It also discusses established and emerging benchmarks, providing a broad overview of methods and datasets used in continual supervised classification.


This content originally appeared on HackerNoon and was authored by The FewShot Prompting Publication

:::info Authors:

(1) Sebastian Dziadzio, University of Tübingen (sebastian.dziadzio@uni-tuebingen.de);

(2) Çağatay Yıldız, University of Tübingen;

(3) Gido M. van de Ven, KU Leuven;

(4) Tomasz Trzcinski, IDEAS NCBR, Warsaw University of Technology, Tooploox;

(5) Tinne Tuytelaars, KU Leuven;

(6) Matthias Bethge, University of Tübingen.

:::

Abstract and 1. Introduction

2. Two problems with the current approach to class-incremental continual learning

3. Methods and 3.1. Infinite dSprites

3.2. Disentangled learning

4. Related work

4.1. Continual learning and 4.2. Benchmarking continual learning

5. Experiments

5.1. Regularization methods and 5.2. Replay-based methods

5.3. Do we need equivariance?

5.4. One-shot generalization and 5.5. Open-set classification

5.6. Online vs. offline

Conclusion, Acknowledgments and References

Supplementary Material

4. Related work

4.1. Continual learning

The continual learning literature typically focuses on catastrophic forgetting in supervised classification. Parameter isolation methods dedicate separate parameters to each task, either by periodically extending the architecture while freezing already trained parameters [33] or by relying on isolated subnetworks [6].

Regularization approaches aim to preserve existing knowledge by limiting the plasticity of the network. Functional regularization methods constrain the network output through knowledge distillation [17] or by using a small set of anchor points to build a functional prior [26, 36]. Weight regularization methods [39] directly constrain network parameters according to their estimated importance for previous tasks. In particular, Variational Continual Learning (VCL) [25] derives the importance estimate by framing continual learning as sequential approximate Bayesian inference. Most methods incorporate regularization into the objective function, but it can also be implemented through constrained optimization [2, 10, 13, 22].

Finally, replay methods [4, 12, 30, 32] retain knowledge through rehearsal: when learning a new task, the network is trained on a mix of new samples from the training stream and previously seen samples drawn from a memory buffer. A special case of this strategy is generative replay [3, 34], where the rehearsal samples are produced by a generative model trained to approximate the data distribution of each class. Many continual learning methods are hybrid systems that combine several of the above techniques.
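To make the rehearsal idea concrete, here is a minimal sketch of a replay memory filled with reservoir sampling and a helper that mixes buffered samples into each training batch. The names `ReplayBuffer` and `rehearsal_batch` are illustrative, not taken from the paper or from any cited method; real replay systems differ in their sampling and eviction policies.

```python
import random


class ReplayBuffer:
    """Fixed-size memory buffer filled with reservoir sampling."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        """Add one sample from the stream, keeping each seen sample
        in memory with equal probability capacity / seen."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, k):
        """Draw up to k stored samples uniformly without replacement."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


def rehearsal_batch(new_samples, buffer, replay_ratio=0.5):
    """Mix fresh stream samples with replayed ones for one training step."""
    n_replay = int(len(new_samples) * replay_ratio)
    return list(new_samples) + buffer.sample(n_replay)
```

In a training loop, each incoming batch would be passed through `buffer.add` and the model would be updated on `rehearsal_batch(batch, buffer)` instead of the raw batch, which is the mechanism by which rehearsal mitigates forgetting.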

4.2. Benchmarking continual learning

Established continual learning benchmarks primarily involve splitting existing computer vision datasets into discrete, non-overlapping segments to study continual supervised classification. Notable examples in this domain include split MNIST [39], split CIFAR [39], and split MiniImageNet [1, 4], along with their augmented counterparts, such as rotated MNIST [22] and permuted MNIST [15]. More recently, contributions from Lomonaco and Maltoni [20], Verwimp et al. [38], and Roady et al. [31] have enriched the field with datasets designed specifically for continual learning, such as CORe50, CLAD, and Stream-51, which comprise temporally correlated images with diverse backgrounds and environments.
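The "split" construction described above can be sketched in a few lines: the class labels of an existing dataset are partitioned into disjoint groups, and each group defines one task. This is an illustrative sketch of the general recipe (the function name `split_benchmark` is ours), not the exact protocol of any one benchmark.

```python
def split_benchmark(labels, classes_per_task):
    """Partition a labeled dataset into disjoint class-incremental tasks,
    in the style of split MNIST / split CIFAR.

    Returns the class groups and, per task, the indices of its samples.
    """
    classes = sorted(set(labels))
    tasks = [classes[i:i + classes_per_task]
             for i in range(0, len(classes), classes_per_task)]
    # Map each class to its task, then bucket sample indices by task.
    task_of = {c: t for t, group in enumerate(tasks) for c in group}
    task_indices = [[] for _ in tasks]
    for i, y in enumerate(labels):
        task_indices[task_of[y]].append(i)
    return tasks, task_indices
```

For a ten-class dataset with two classes per task this yields five tasks, e.g. `[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]`, which the learner then visits sequentially without revisiting earlier segments.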


:::info This paper is available on arxiv under CC 4.0 license.

:::


