cleanlab / label-errors
🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet
★182 · Updated 2 years ago
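The corrected labels in this repo were found with confident learning, which is implemented in full in the companion cleanlab library. As a rough orientation only, here is a simplified numpy sketch of the core idea; the `labels` and `pred_probs` arrays are hypothetical stand-ins for a real dataset and a real model's held-out predictions:

```python
# Simplified sketch of the confident-learning idea behind these corrected
# test sets: an example is a likely label error when its held-out probability
# for its given label falls below that class's average self-confidence,
# while some other class clears its own threshold.
import numpy as np

labels = np.array([0, 1, 1, 0, 2])            # given (possibly noisy) labels
pred_probs = np.array([                       # out-of-sample predicted probabilities
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.80, 0.10, 0.10],                       # labeled 1, but the model says 0
    [0.70, 0.20, 0.10],
    [0.05, 0.05, 0.90],
])

n_classes = pred_probs.shape[1]
# Per-class threshold: mean predicted probability among examples given that label.
thresholds = np.array([pred_probs[labels == k, k].mean() for k in range(n_classes)])

self_conf = pred_probs[np.arange(len(labels)), labels]
above = pred_probs >= thresholds              # classes each example "confidently" belongs to
suspect = (self_conf < thresholds[labels]) & (above.sum(axis=1) > 0)
print(np.flatnonzero(suspect))                # -> [2]
```

The full cleanlab implementation refines this with calibration of the confident joint and per-example ranking; the sketch above only reproduces the thresholding intuition.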
Alternatives and similar repositories for label-errors:
Users interested in label-errors are comparing it to the repositories listed below.
- REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets --- https://arxiv.org/abs/2004.07999 ★110 · Updated 2 years ago
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ★227 · Updated 3 years ago
- A benchmark of data-centric tasks from across the machine learning lifecycle. ★72 · Updated 2 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ★160 · Updated last year
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… ★327 · Updated last year
- PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures (CVPR 2022) ★105 · Updated 2 years ago
- The official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" ★77 · Updated 2 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ★109 · Updated 2 years ago
- ImageNet Testbed, associated with the paper "Measuring Robustness to Natural Distribution Shifts in Image Classification." ★117 · Updated last year
- Understanding model mistakes with human annotations ★106 · Updated 2 years ago
- Code for Active Learning at the ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… ★52 · Updated 3 years ago
- Code for the ICML 2022 paper "Out-of-distribution Detection with Deep Nearest Neighbors" ★179 · Updated 7 months ago
- Framework code with wandb, checkpointing, logging, configs, and experimental protocols. Useful for fine-tuning models or training from scratch. ★149 · Updated 2 years ago
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models. ★558 · Updated last year
- Drift Detection for your PyTorch Models ★315 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ★77 · Updated last year
- Train ImageNet *fast* in 500 lines of code with FFCV ★139 · Updated 9 months ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ★47 · Updated last year
- Labels and other data for the paper "Are we done with ImageNet?" ★190 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration (see the sketch after this list) ★145 · Updated 3 years ago
- Calibration of Convolutional Neural Networks ★160 · Updated last year
- DISTIL: Deep dIverSified inTeractIve Learning. An active/interactive learning library built on PyTorch for reducing labeling costs. ★147 · Updated 2 years ago
- This repository contains the code for the distribution shift framework presented in A Fine-Grained Analysis on Distribution Shift (Wiles et al.) ★82 · Updated 4 months ago
- Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) ★340 · Updated 6 months ago
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning ★205 · Updated last year
- ImageNet-R(endition) and DeepAugment (ICCV 2021) ★261 · Updated 3 years ago
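Several entries above, the reliability-diagrams repo and the two calibration repos in particular, revolve around the same diagnostic. As promised at the reliability-diagrams entry, here is a minimal self-contained sketch of the binning behind a reliability diagram; every array below is a synthetic stand-in invented for illustration:

```python
# Minimal sketch of a reliability diagram: bin predictions by confidence
# and compare each bin's average confidence to its empirical accuracy.
import numpy as np

def reliability_bins(confidences, correct, n_bins=10):
    """Return per-bin (avg confidence, accuracy, count) for a reliability diagram."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            rows.append((confidences[mask].mean(), correct[mask].mean(), mask.sum()))
    return rows

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)        # predicted confidence (synthetic)
correct = rng.uniform(size=1000) < conf * 0.8  # overconfident model: accuracy < confidence
for avg_conf, acc, n in reliability_bins(conf, correct.astype(float)):
    print(f"conf≈{avg_conf:.2f}  acc={acc:.2f}  n={n}")
```

For a well-calibrated model, per-bin accuracy tracks average confidence; the synthetic model above is deliberately overconfident, so accuracy sits below confidence in every bin.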