p-lambda / wilds
A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models.
☆569 · Updated last year
Alternatives and similar repositories for wilds
Users interested in wilds are comparing it to the libraries listed below.
- PyTorch code to run synthetic experiments. ☆423 · Updated 3 years ago
- ☆470 · Updated 2 months ago
- Distributionally robust neural networks for group shifts ☆269 · Updated 2 years ago
- Project site for "Your Classifier is Secretly an Energy-Based Model and You Should Treat it Like One" ☆423 · Updated 2 years ago
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… ☆334 · Updated 2 years ago
- An implementation of the BADGE batch active learning algorithm. ☆207 · Updated last year
- A PyTorch reimplementation of influence functions from the ICML 2017 best paper: Understanding Black-box Predictions via Influence… ☆335 · Updated last year
- Optimal Transport Dataset Distance ☆167 · Updated 3 years ago
- Concept Bottleneck Models, ICML 2020 ☆204 · Updated 2 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆74 · Updated 2 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆160 · Updated last year
- Original dataset release for CIFAR-10H ☆83 · Updated 4 years ago
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆231 · Updated 3 years ago
- Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks. ☆176 · Updated last year
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆241 · Updated 2 years ago
- A clean and simple data loading library for Continual Learning ☆434 · Updated 2 years ago
- A simple way to calibrate your neural network. ☆1,153 · Updated 3 years ago
- Understanding Training Dynamics of Deep ReLU Networks ☆293 · Updated last month
- ☆415 · Updated 3 years ago
- The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a… ☆363 · Updated 11 months ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆150 · Updated 2 years ago
- Code for the paper "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks". ☆346 · Updated 5 years ago
- NumPy library for calibration metrics ☆73 · Updated 4 months ago
- Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) ☆350 · Updated 2 months ago
- Code repo for "A Simple Baseline for Bayesian Uncertainty in Deep Learning" ☆468 · Updated 2 years ago
- Reusable BatchBALD implementation ☆79 · Updated last year
- ☆111 · Updated 2 years ago
- A PyTorch implementation of the paper "Evidential Deep Learning to Quantify Classification Uncertainty" ☆478 · Updated last year
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆272 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆153 · Updated 3 years ago
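Several of the entries above (temperature scaling, net:cal, the NumPy calibration-metrics library, reliability diagrams) revolve around the same quantity: the gap between a model's confidence and its accuracy. As a rough illustration of what such libraries measure, here is a minimal NumPy sketch of expected calibration error (ECE) with equal-width confidence bins. The function name and binning scheme are illustrative assumptions, not the API of any library listed here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE sketch: weighted average gap between confidence and accuracy.

    confidences: predicted max-probability per sample, in [0, 1]
    correct: 1 if the prediction was right for that sample, else 0
    n_bins: number of equal-width confidence bins (illustrative default)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; the first bin also picks up exact zeros
        in_bin = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            in_bin |= confidences == 0.0
        if in_bin.sum() == 0:
            continue
        avg_conf = confidences[in_bin].mean()   # mean confidence in the bin
        avg_acc = correct[in_bin].mean()        # empirical accuracy in the bin
        ece += (in_bin.sum() / n) * abs(avg_conf - avg_acc)
    return ece
```

For example, two predictions both made with confidence 0.9 where only one is correct give an ECE of |0.9 - 0.5| = 0.4; a perfectly calibrated model scores 0.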