JuliousHurtado / meta-training-setup
This repository contains the code for the paper "Optimizing Reusable Knowledge for Continual Learning via Metalearning".
☆11 · Updated 3 years ago
Alternatives and similar repositories for meta-training-setup
Users interested in meta-training-setup are comparing it to the libraries listed below.
- Codebase for the paper titled "Continual learning with local module selection" ☆25 · Updated 3 years ago
- Source code for "Gradient Based Memory Editing for Task-Free Continual Learning", 4th Lifelong ML Workshop @ ICML 2020 ☆17 · Updated 2 years ago
- Code release for "Learning to Adapt to Evolving Domains" ☆31 · Updated 3 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated last year
- "Scalable and Order-robust Continual Learning with Additive Parameter Decomposition", ICLR 2020☆22Updated 2 years ago
- [NeurIPS 2020, Spotlight] Improved Schemes for Episodic Memory-based Lifelong Learning☆18Updated 4 years ago
- [Spotlight ICLR 2023 paper] Continual evaluation for lifelong learning with neural networks, identifying the stability gap.☆28Updated 2 years ago
- AFEC: Active Forgetting of Negative Transfer in Continual Learning (NeurIPS 2021)☆26Updated last year
- PyTorch implementation of POEM (Out-of-distribution detection with posterior sampling), ICML 2022☆28Updated 2 years ago
- ☆43Updated last month
- Official Implementation of CVPR2021 paper: Continual Learning via Bit-Level Information Preserving☆39Updated 2 years ago
- Code for our paper: Samuel and Chechik, "Distributional Robustness Loss for Long-tail Learning"☆32Updated 3 years ago
- ☆23Updated 2 years ago
- This is a public repository for:☆38Updated 3 years ago
- ☆26Updated 3 years ago
- This repository is the official implementation of Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regulari…☆21Updated 2 years ago
- ☆26Updated 2 years ago
- ☆61Updated 3 years ago
- ☆16Updated 2 years ago
- ☆29Updated 3 years ago
- Test-Time Adaptation via Conjugate Pseudo-Labels☆40Updated last year
- ☆49Updated 2 years ago
- Code for NeurIPS 2021 paper "Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning".☆16Updated 3 years ago
- ☆23Updated 3 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild☆56Updated 2 years ago
- ☆23Updated last year
- ☆64Updated 3 years ago
- ☆14Updated 4 years ago
- ☆22Updated last year
- Memory Replay with Data Compression (ICLR 2022)☆15Updated last year