zeyademam / active_learning
Code for Active Learning at the ImageNet Scale. This repository implements many popular active learning algorithms and supports training with PyTorch's DistributedDataParallel (DDP).
☆52 · Updated 3 years ago
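As a rough illustration of what such a codebase does each round, below is a minimal sketch of one uncertainty-sampling acquisition step. The function and model names are illustrative assumptions, not this repository's actual API; a real ImageNet-scale setup would wrap the model in DDP and shard the unlabeled pool across ranks.

```python
# Minimal sketch of one uncertainty-sampling acquisition round.
# All names here (select_by_entropy, the toy model, the pool format) are
# illustrative assumptions and do NOT correspond to this repository's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


def select_by_entropy(model, unlabeled_loader, budget, device="cpu"):
    """Score unlabeled examples by predictive entropy and return the
    pool indices of the `budget` most uncertain ones."""
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for idx, x in unlabeled_loader:  # loader yields (pool index, image batch)
            probs = F.softmax(model(x.to(device)), dim=1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            scores.append(entropy.cpu())
            indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = scores.topk(budget).indices
    return indices[top].tolist()  # pool indices to send for annotation


if __name__ == "__main__":
    # Toy usage with random data; in practice the model would be wrapped in
    # DistributedDataParallel and the pool split across processes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    pool = [(torch.tensor([i]), torch.randn(1, 3, 32, 32)) for i in range(100)]
    picked = select_by_entropy(model, pool, budget=10)
    print("Next batch to annotate:", picked)
```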
Alternatives and similar repositories for active_learning:
Users interested in active_learning are comparing it to the repositories listed below
- The official code for the publication: "The Close Relationship Between Contrastive Learning and Meta-Learning". ☆19 · Updated 2 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆55 · Updated 2 years ago
- ☆54 · Updated 4 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆95 · Updated last year
- The official PyTorch implementation - Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… ☆78 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆47 · Updated last year
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆70 · Updated 9 months ago
- ☆46 · Updated 4 years ago
- ImageNet Testbed, associated with the paper "Measuring Robustness to Natural Distribution Shifts in Image Classification." ☆118 · Updated last year
- Code for the paper "A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others" ☆47 · Updated 7 months ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- ☆105 · Updated last year
- ☆57 · Updated 2 years ago
- ☆42 · Updated 4 months ago
- LISA for ICML 2022 ☆47 · Updated last year
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆90 · Updated 4 years ago
- ☆27 · Updated 3 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆85 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 2 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆55 · Updated 2 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- ☆62 · Updated 3 years ago