MadryLab / DsDm
☆46 · Updated last year
Alternatives and similar repositories for DsDm:
Users interested in DsDm are comparing it to the repositories listed below.
- AI Logging for Interpretability and Explainability 🔬 ☆107 · Updated 9 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated 3 weeks ago
- ☆36 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆43 · Updated last year
- ☆38 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated 11 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated last month
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆74 · Updated last year
- ☆37 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆90 · Updated 3 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆71 · Updated 4 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 5 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- General-purpose activation steering library ☆50 · Updated 2 months ago
- Codebase for decoding compressed trust. ☆23 · Updated 10 months ago
- ☆81 · Updated last year
- ☆30 · Updated 3 months ago
- ☆37 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated 9 months ago
- Replicating O1 inference-time scaling laws ☆83 · Updated 3 months ago
- ☆92 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆26 · Updated 5 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆53 · Updated 5 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated 10 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆47 · Updated last week
- Codebase for ICML submission "DOGE: Domain Reweighting with Generalization Estimation" ☆15 · Updated last year