aks2203 / easy-to-hard-data
PyTorch datasets for Easy-To-Hard
☆27 · Updated 6 months ago
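For context, the repository exposes its easy-to-hard datasets as standard PyTorch `Dataset` classes, so they drop into the usual `DataLoader` workflow. Below is a minimal usage sketch; the `easy_to_hard_data` import, the `PrefixSumDataset` class, and its `num_bits` argument are assumptions based on the repo's README, so verify the exact names and signatures against the repo itself.

```python
# Minimal sketch: loading one of the easy-to-hard datasets and batching it
# with a standard PyTorch DataLoader. The package name `easy_to_hard_data`,
# the PrefixSumDataset class, and the `num_bits` argument are assumed from
# the repo's README and may differ; check the repo before relying on this.
import torch
from easy_to_hard_data import PrefixSumDataset

# Downloads (if needed) and loads 32-bit prefix-sum problems under ./data.
dataset = PrefixSumDataset("./data", num_bits=32)

loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)
inputs, targets = next(iter(loader))  # one batch of (input, target) pairs
print(inputs.shape, targets.shape)
```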
Alternatives and similar repositories for easy-to-hard-data
Users interested in easy-to-hard-data are comparing it to the libraries listed below.
- ☆29 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆18 · Updated 2 years ago
- Distilling Model Failures as Directions in Latent Space ☆47 · Updated 2 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆59 · Updated last year
- ☆16 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- ☆14 · Updated last year
- ☆34 · Updated last year
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆19 · Updated 2 years ago
- ☆55 · Updated 2 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated last year
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆24 · Updated last year
- A simple JAX implementation of influence functions. ☆16 · Updated last year
- A simple and efficient baseline for data attribution ☆11 · Updated last year
- Code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆37 · Updated 2 years ago
- ☆38 · Updated 4 years ago
- ☆35 · Updated 6 months ago
- Code for the paper "Data Feedback Loops: Model-driven Amplification of Dataset Biases" ☆16 · Updated 2 years ago
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆48 · Updated last year
- A centralized place for deep thinking code and experiments ☆85 · Updated last year
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- A simple PyTorch implementation of influence functions. ☆89 · Updated last year
- ☆29 · Updated 2 years ago
- ☆12 · Updated 3 years ago
- ☆36 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆79 · Updated 8 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year