aimagelab / LiDER
Official implementation of "On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning"
☆14 · Updated 2 years ago
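LiDER's premise is that rehearsal works better when the network stays smooth (has a small Lipschitz constant) around the samples stored in the replay buffer. As a rough illustration only, the sketch below adds a gradient-norm penalty on buffer inputs to a plain replay loss; the names (`rehearsal_step`, `buffer_x`, `lip_lambda`) are hypothetical, and the official repository implements its own layer-wise Lipschitz regularization rather than this exact penalty.

```python
# Illustrative sketch, NOT the official LiDER objective: replay loss plus a
# gradient-norm penalty on buffer samples as a proxy for a small local
# Lipschitz constant. All names below are placeholders.
import torch
import torch.nn.functional as F

def rehearsal_step(model, cur_x, cur_y, buffer_x, buffer_y, lip_lambda=0.1):
    # Standard cross-entropy on the current batch and on replayed buffer samples.
    loss = F.cross_entropy(model(cur_x), cur_y) + F.cross_entropy(model(buffer_x), buffer_y)

    # Lipschitz-style penalty: discourage large input gradients on buffer points.
    buffer_x = buffer_x.clone().requires_grad_(True)
    out = model(buffer_x)
    grad = torch.autograd.grad(out.sum(), buffer_x, create_graph=True)[0]
    loss = loss + lip_lambda * grad.flatten(1).norm(dim=1).mean()
    return loss
```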
Alternatives and similar repositories for LiDER
Users interested in LiDER are comparing it to the repositories listed below.
- Official repository for the paper "Self-Supervised Models are Continual Learners" (CVPR 2022) ☆124 · Updated 2 years ago
- PyTorch implementation of our NeurIPS 2021 paper "Class-Incremental Learning via Dual Augmentation" ☆37 · Updated 2 years ago
- Co^2L: Contrastive Continual Learning (ICCV 2021) ☆95 · Updated 3 years ago
- PyTorch implementation of our CVPR 2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning" ☆110 · Updated 2 years ago
- Code implementation for the CVPR 2023 paper "Class-Incremental Exemplar Compression for Class-Incremental Learning" ☆24 · Updated last year
- Code for the NeurIPS 2023 paper "FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning" ☆39 · Updated 6 months ago
- RanPAC: Random Projections and Pre-trained Models for Continual Learning - official code repository for the NeurIPS 2023 paper ☆47 · Updated 2 months ago
- Code for "A Comprehensive Empirical Evaluation on Online Continual Learning" ICCVW 2023 VCL Workshop☆38Updated last year
- Code for the paper "Representational Continuity for Unsupervised Continual Learning" (ICLR 22)☆96Updated 2 years ago
- A PyTorch framework for Continual Learning research.☆21Updated last year
- Code for our CVPR 2022 workshop paper "Towards Exemplar-Free Continual Learning in Vision Transformers"☆23Updated 2 years ago
- [Spotlight ICLR 2023 paper] Continual evaluation for lifelong learning with neural networks, identifying the stability gap.☆28Updated 2 years ago
- Codebase for Continual Prototype Evolution (CoPE) to attain perpetually representative prototypes for online and non-stationary data streams ☆44 · Updated 3 years ago
- Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need (IJCV 2024)