MadryLab / DsDm
☆47 · Updated last year
Alternatives and similar repositories for DsDm:
Users interested in DsDm are comparing it to the repositories listed below.
- ☆41 · Updated last week
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆38 · Updated last month
- Test-time-training on nearest neighbors for large language models ☆39 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆75 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- Codebase for ICML submission "DOGE: Domain Reweighting with Generalization Estimation" ☆17 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆50 · Updated last month
- Forcing Diffuse Distributions out of Language Models ☆15 · Updated 7 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆53 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆111 · Updated 10 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆76 · Updated 5 months ago
- Revisiting Efficient Training Algorithms for Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Provides the answer to "How to do patching on all available SAEs on GPT-2?"; the official repository of the implementation of the p… ☆11 · Updated 2 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆51 · Updated 3 weeks ago
- ☆32 · Updated 4 months ago
- ☆51 · Updated 11 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆65 · Updated 8 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 2 months ago
- Codebase for decoding compressed trust. ☆23 · Updated 11 months ago
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆23 · Updated last year
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆22 · Updated 4 months ago
- ☆93 · Updated last year
- ☆54 · Updated 2 years ago
- ☆37 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated 11 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 5 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆54 · Updated 6 months ago
- General-purpose activation steering library ☆59 · Updated 3 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆91 · Updated 10 months ago