acmi-lab / RLSbench
Code and results accompanying our paper titled RLSbench: Domain Adaptation under Relaxed Label Shift
☆34 · Updated last year
Alternatives and similar repositories for RLSbench
Users interested in RLSbench are comparing it to the repositories listed below.
- ☆45 · Updated 2 years ago
- LISA for ICML 2022 ☆49 · Updated 2 years ago
- Benchmark for Natural Temporal Distribution Shift (NeurIPS 2022) ☆66 · Updated 2 years ago
- ☆30 · Updated 3 years ago
- Repo for the paper: "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆36 · Updated 2 years ago
- [ICLR'22] Self-supervised learning optimally robust representations for domain shift. ☆24 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆42 · Updated last year
- Codebase for the paper titled "Continual learning with local module selection" ☆25 · Updated 3 years ago
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts" published at ICLR 2023 ☆28 · Updated last year
- A regularized self-labeling approach to improve the generalization and robustness of fine-tuned models ☆28 · Updated 2 years ago
- This repository is the official implementation of Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regulari… ☆21 · Updated 2 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago
- ☆107 · Updated last year
- Learning Representations that Support Robust Transfer of Predictors ☆20 · Updated 3 years ago
- Code for Environment Inference for Invariant Learning (ICML 2021 paper) ☆50 · Updated 3 years ago
- ☆12 · Updated last year
- ☆10 · Updated 3 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆86 · Updated 3 years ago
- ☆36 · Updated 2 years ago
- [NeurIPS'22] Official repository for Characterizing Datapoints via Second-Split Forgetting ☆15 · Updated last year
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆56 · Updated 2 years ago
- ☆36 · Updated 3 years ago
- Provably (and non-vacuously) bounding test error of deep neural networks under distribution shift with unlabeled test data. ☆10 · Updated last year
- Code for NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆17 · Updated last year
- ☆18 · Updated 3 years ago
- ☆27 · Updated last year
- The code for our NeurIPS 2021 paper "Kernelized Heterogeneous Risk Minimization". ☆12 · Updated 3 years ago
- The official code for the publication "The Close Relationship Between Contrastive Learning and Meta-Learning". ☆19 · Updated 2 years ago
- ☆44 · Updated last month
- ☆23 · Updated 2 years ago