baharanm / craig
Data-efficient Training of Machine Learning Models
☆63 · Updated 4 years ago
Alternatives and similar repositories for craig
Users interested in craig are comparing it to the repositories listed below.
- Coresets via Bilevel Optimization ☆66 · Updated 4 years ago
- Tilted Empirical Risk Minimization (ICLR '21) ☆59 · Updated last year
- Reusable BatchBALD implementation ☆79 · Updated last year
- ☆50 · Updated 2 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆31 · Updated 4 years ago
- PyTorch implementation of the Nonlinear Information Bottleneck ☆40 · Updated 11 months ago
- Awesome coreset/core-set/subset/sample selection works. ☆175 · Updated 11 months ago
- Regularized Learning under label shifts ☆18 · Updated 6 years ago
- Distributional Shapley: A Distributional Framework for Data Valuation ☆30 · Updated last year
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆57 · Updated 2 years ago
- Implementation of the Minimax Pareto Fairness framework ☆21 · Updated 4 years ago
- Code for the paper "Understanding Generalization through Visualizations" ☆61 · Updated 4 years ago
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆49 · Updated 4 years ago
- Code for the paper "Tensor Programs II: Neural Tangent Kernel for Any Architecture" ☆105 · Updated 4 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- ☆22 · Updated 6 years ago
- ☆62 · Updated 4 years ago
- Code repository for the paper "Invariant and Transportable Representations for Anti-Causal Domain Shifts" ☆16 · Updated 2 years ago
- [NeurIPS 2020] Coresets for Robust Training of Neural Networks against Noisy Labels ☆34 · Updated 4 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Code to replicate the experiments in the NeurIPS 2019 paper "One ticket to win them all: generalizing lottery … ☆51 · Updated 11 months ago
- Continual Learning with Hypernetworks. A continual learning approach that has the flexibility to learn a dedicated set of parameters, fin… ☆164 · Updated 3 years ago
- Source code for Learning Deep Kernels for Non-Parametric Two-Sample Tests (ICML 2020) ☆49 · Updated 4 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper which appeared in the NeurIPS 2021 proceedings ☆33 · Updated last year
- Federated posterior averaging implemented in JAX ☆51 · Updated 2 years ago
- A curated list of papers and resources about distribution shift in machine learning ☆120 · Updated last year
- ☆23 · Updated 2 years ago
- Official implementation of Learning the Pareto Front with Hypernetworks [ICLR 2021] ☆105 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- ☆37 · Updated 4 years ago
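For context on what ties this list together: craig and several of the repositories above (the bilevel-optimization and noisy-label coreset projects in particular) select weighted subsets of training data whose gradients approximate the full dataset's gradient. A minimal sketch of that idea, as greedy facility-location selection over per-example gradients, is shown below; this is an illustrative toy (the function name and synthetic gradients are assumptions), not the code in any of these repositories:

```python
import numpy as np

def greedy_coreset(grads, k):
    """Pick k examples whose gradients best 'cover' all per-example
    gradients (greedy facility-location maximization), and weight each
    pick by how many examples it covers. Illustrative sketch only."""
    n = grads.shape[0]
    # pairwise gradient distances, turned into non-negative similarities
    dists = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=-1)
    sims = dists.max() - dists
    best = np.zeros(n)        # current best similarity to the subset
    selected = []
    for _ in range(k):
        # marginal gain of each candidate j: sum_i max(sims[i,j]-best[i], 0)
        gains = np.clip(sims - best[:, None], 0.0, None).sum(axis=0)
        gains[selected] = -1.0          # never re-pick an element
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sims[:, j])
    # weight each selected example by the size of the cluster it represents
    assign = np.argmax(sims[:, selected], axis=1)
    weights = np.bincount(assign, minlength=k)
    return selected, weights
```

On well-separated data the greedy rule places one weighted representative per cluster, which is why a small weighted subset can stand in for the full gradient sum during training.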