aks2203 / easy-to-hard-data
PyTorch Datasets for Easy-To-Hard
☆27 · Updated 3 months ago
Alternatives and similar repositories for easy-to-hard-data:
Users interested in easy-to-hard-data are comparing it to the repositories listed below:
- ☆32 · Updated 3 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆54 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆22 · Updated last year
- ☆34 · Updated last year
- ☆14 · Updated last year
- ☆16 · Updated last year
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆76 · Updated 5 months ago
- A simple and efficient baseline for data attribution ☆11 · Updated last year
- Official repository for the ICML 2023 paper "Can Neural Network Memorization Be Localized?" ☆18 · Updated last year
- ☆28 · Updated last year
- Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion ☆11 · Updated last year
- Code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆36 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 2 months ago
- [ICLR 2025] Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆52 · Updated last month
- ☆42 · Updated 2 months ago
- ☆42 · Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆58 · Updated 3 years ago
- Codebase for "Obfuscated Activations Bypass LLM Latent-Space Defenses" ☆15 · Updated 2 months ago
- What do we learn from inverting CLIP models? ☆54 · Updated last year
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- ☆54 · Updated 4 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆20 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- ☆41 · Updated last year
- Source code for "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago