p-lambda / wilds
A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models.
☆578 · Updated last year
Alternatives and similar repositories for wilds
Users interested in wilds are comparing it to the libraries listed below.
- ☆471 · Updated 5 months ago
- Distributionally robust neural networks for group shifts ☆280 · Updated 2 years ago
- PyTorch code to run synthetic experiments. ☆427 · Updated 4 years ago
- An implementation of the BADGE batch active learning algorithm. ☆208 · Updated last year
- Project site for "Your Classifier is Secretly an Energy-Based Model and You Should Treat it Like One" ☆425 · Updated 3 years ago
- Optimal Transport Dataset Distance ☆170 · Updated 3 years ago
- The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a … ☆366 · Updated last year
- A simple way to calibrate your neural network. ☆1,162 · Updated 2 months ago
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… ☆342 · Updated 2 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆151 · Updated 2 years ago
- Original dataset release for CIFAR-10H ☆83 · Updated 4 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆74 · Updated 2 years ago
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆237 · Updated 3 years ago
- A PyTorch reimplementation of Influence Functions from the ICML 2017 best paper: Understanding Black-box Predictions via Influence… ☆340 · Updated last year
- Concept Bottleneck Models, ICML 2020 ☆216 · Updated 2 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆161 · Updated last year
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆248 · Updated 2 years ago
- ☆421 · Updated 4 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆158 · Updated 3 years ago
- NumPy library for calibration metrics ☆73 · Updated 7 months ago
- Code repo for "A Simple Baseline for Bayesian Uncertainty in Deep Learning" ☆472 · Updated 2 years ago
- A PyTorch implementation of the paper "Evidential Deep Learning to Quantify Classification Uncertainty" ☆493 · Updated last year
- A clean and simple data loading library for Continual Learning ☆439 · Updated 2 years ago
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆272 · Updated 3 years ago
- Combating hidden stratification with GEORGE ☆64 · Updated 4 years ago
- Literature survey, paper reviews, experimental setups, and a collection of implementations of baseline methods for predictive uncertaint… ☆632 · Updated 3 years ago
- ☆334 · Updated 2 months ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆374 · Updated 3 years ago
- Code for the paper "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks". ☆347 · Updated 6 years ago
- Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty ☆144 · Updated 2 years ago
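Many of the repositories above concern confidence calibration. As a concrete illustration of the simplest of these techniques, temperature scaling divides a trained model's logits by a single scalar T fit on held-out data to minimize negative log-likelihood. The sketch below is a minimal NumPy illustration on synthetic logits, not any listed repository's actual API; a grid search stands in for the gradient-based fit such libraries typically use.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: divide logits by T before normalizing.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Average negative log-likelihood of the true class at temperature T.
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

# Synthetic, miscalibrated validation logits (hypothetical data for illustration).
rng = np.random.default_rng(0)
n, k = 1000, 5
labels = rng.integers(0, k, size=n)
logits = rng.normal(0.0, 1.0, size=(n, k))
logits[np.arange(n), labels] += 4.0  # boost the true class by a fixed margin

# Fit T by grid search over a range that includes T = 1 (no rescaling).
grid = np.linspace(0.5, 5.0, 91)
T_star = grid[np.argmin([nll(logits, labels, T) for T in grid])]
print(f"fitted T = {T_star:.2f}, "
      f"NLL before = {nll(logits, labels, 1.0):.4f}, "
      f"NLL after = {nll(logits, labels, T_star):.4f}")
```

Because temperature scaling rescales all logits by the same scalar, it changes predicted confidences without changing the argmax, so accuracy is unaffected while NLL (and usually calibration error) improves.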