jlko/active-testing
Active and Sample-Efficient Model Evaluation
☆24 · Updated 4 years ago
Alternatives and similar repositories for active-testing:
Users interested in active-testing are comparing it to the repositories listed below.
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift ☆31 · Updated 3 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆40 · Updated 4 years ago
- Combating hidden stratification with GEORGE ☆63 · Updated 3 years ago
- The official repository for the "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" paper ☆40 · Updated 3 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆42 · Updated last year
- Improving Transformation Invariance in Contrastive Representation Learning ☆13 · Updated 4 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆56 · Updated last year
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- [ICLR'22] Self-supervised learning of optimally robust representations for domain shift ☆23 · Updated 3 years ago
- Reusable BatchBALD implementation ☆79 · Updated last year
- Domain Adaptation ☆23 · Updated 3 years ago
- Code and results accompanying our paper titled "RLSbench: Domain Adaptation under Relaxed Label Shift" ☆34 · Updated last year
- Visual Representation Learning Benchmark for Self-Supervised Models ☆36 · Updated last year
- A regularized self-labeling approach to improve the generalization and robustness of fine-tuned models ☆28 · Updated 2 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆35 · Updated last year
- Code for the paper "Can contrastive learning avoid shortcut solutions?" (NeurIPS 2021) ☆47 · Updated 3 years ago
- Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … ☆65 · Updated 4 years ago