reds-lab / LAVA
This is the official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023).
☆48 · Updated last year
Alternatives and similar repositories for LAVA
Users interested in LAVA are comparing it to the repositories listed below.
- Scalable data valuation using optimal transport (ICLR 2025) ☆13 · Updated 3 weeks ago
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆80 · Updated last year
- A simple PyTorch implementation of influence functions. ☆89 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- ☆34 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Official Repository for ICML 2023 paper "Can Neural Network Memorization Be Localized?" ☆19 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆81 · Updated last month
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆58 · Updated 3 years ago
- Understanding Rare Spurious Correlations in Neural Network ☆12 · Updated 3 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023) ☆99 · Updated 6 months ago
- ☆22 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆103 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆52 · Updated 2 years ago
- A fast, effective data attribution method for neural networks in PyTorch ☆214 · Updated 8 months ago
- Code relative to "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆19 · Updated 2 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆47 · Updated 2 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆76 · Updated last year
- ☆46 · Updated 11 months ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- [ECCV24] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, … ☆22 · Updated 2 months ago
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup ☆24 · Updated last month
- ☆30 · Updated 2 years ago
- ☆71 · Updated 3 years ago
- ☆55 · Updated 2 years ago