sanketvmehta / lifelong-learning-pretraining-and-sam
Code for the paper "An Empirical Investigation of the Role of Pre-training in Lifelong Learning" (Mehta, S. V., Patil, D., Chandar, S., & Strubell, E.), Journal of Machine Learning Research 24 (2023).
☆16 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for lifelong-learning-pretraining-and-sam
- Bayesian Low-Rank Adaptation for Large Language Models ☆28 · Updated 5 months ago
- Code for the NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆14 · Updated 10 months ago
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts", published at ICLR 2023 ☆28 · Updated last year
- Source code for "Gradient Based Memory Editing for Task-Free Continual Learning", 4th Lifelong ML Workshop @ ICML 2020 ☆17 · Updated last year
- ☆25 · Updated 4 months ago
- Code for "Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent" (ICML 2020 Lifelong Learning Workshop) ☆41 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆28 · Updated last month
- ☆38 · Updated 2 weeks ago
- Codebase for the paper "Continual learning with local module selection" ☆25 · Updated 3 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Continual Learning in Low-rank Orthogonal Subspaces (NeurIPS'20) ☆35 · Updated 4 years ago
- Official implementation of Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regulari… ☆21 · Updated last year
- Bayesian low-rank adaptation for large language models ☆23 · Updated 6 months ago
- ☆34 · Updated 3 months ago
- ☆30 · Updated 3 years ago
- Code for the paper "Representational Continuity for Unsupervised Continual Learning" (ICLR 2022) ☆94 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆24 · Updated 3 weeks ago
- Code and results accompanying our paper "RLSbench: Domain Adaptation under Relaxed Label Shift" ☆35 · Updated last year
- CLEAR benchmark (NeurIPS 2021 Datasets & Benchmarks Track) ☆23 · Updated last year
- Official codebase of the paper "Rehearsal revealed: The limits and merits of revisiting samples in continual learning" ☆27 · Updated 3 years ago
- [NeurIPS'22] Official repository for "Characterizing Datapoints via Second-Split Forgetting" ☆14 · Updated last year
- Code for the NeurIPS 2021 paper "Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning" ☆16 · Updated 3 years ago
- [ICLR'22] Self-supervised learning of optimally robust representations for domain shift ☆23 · Updated 2 years ago
- ☆34 · Updated 9 months ago
- Repo for the paper "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆35 · Updated 2 years ago
- LISA (ICML 2022) ☆47 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One (ECCV 2022) ☆17 · Updated last year
- Continual learning of task-specific approximations of the parameter posterior distribution via a shared hypernetwork ☆16 · Updated 3 weeks ago