cleanlab / label-errors
🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet
☆184 · Updated 2 years ago
Alternatives and similar repositories for label-errors:
Users interested in label-errors are comparing it to the libraries listed below.
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) — ☆230 · Updated 3 years ago
- A benchmark of data-centric tasks from across the machine learning lifecycle. — ☆72 · Updated 2 years ago
- REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets --- https://arxiv.org/abs/2004.07999 — ☆110 · Updated 2 years ago
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… — ☆330 · Updated last year
- Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… — ☆53 · Updated 3 years ago
- PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures (CVPR 2022) — ☆106 · Updated 2 years ago
- ☆137 · Updated last year
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" — ☆160 · Updated last year
- DISTIL: Deep dIverSified inTeractIve Learning. An active/inter-active learning library built on PyTorch for reducing labeling costs. — ☆149 · Updated 2 years ago
- Labels and other data for the paper "Are we done with ImageNet?" — ☆191 · Updated 3 years ago
- ☆468 · Updated 9 months ago
- ImageNet Testbed, associated with the paper "Measuring Robustness to Natural Distribution Shifts in Image Classification." — ☆118 · Updated last year
- ☆95 · Updated 2 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) — ☆109 · Updated 2 years ago
- This repository contains the results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" — ☆180 · Updated 3 years ago
- This repository contains the code of the distribution shift framework presented in A Fine-Grained Analysis on Distribution Shift (Wiles e… — ☆83 · Updated last month
- ☆205 · Updated 2 years ago
- Code for the ICML 2022 paper "Out-of-distribution Detection with Deep Nearest Neighbors" — ☆183 · Updated 9 months ago
- Reliability diagrams visualize whether a classifier model needs calibration — ☆150 · Updated 3 years ago
- Train ImageNet *fast* in 500 lines of code with FFCV — ☆142 · Updated 11 months ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… — ☆125 · Updated 3 years ago
- Drift detection for your PyTorch models — ☆316 · Updated 2 years ago
- ImageNet-R(endition) and DeepAugment (ICCV 2021) — ☆264 · Updated 3 years ago
- Framework code with wandb, checkpointing, logging, configs, experimental protocols. Useful for fine-tuning models or training from scratc… — ☆150 · Updated 2 years ago
- The official PyTorch implementation - Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… — ☆78 · Updated 2 years ago
- A new test set for ImageNet — ☆250 · Updated last year
- Combating hidden stratification with GEORGE — ☆63 · Updated 3 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. — ☆78 · Updated 2 years ago
- Understanding model mistakes with human annotations — ☆106 · Updated 2 years ago
- Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) — ☆345 · Updated last week