MadryLab / DsDm
☆51 · Updated last year
Alternatives and similar repositories for DsDm
Users interested in DsDm are comparing it to the libraries listed below.
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆187 · Updated last year
- ☆41 · Updated 2 years ago
- ☆52 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆63 · Updated 4 months ago
- ☆103 · Updated 2 years ago
- Test-time training on nearest neighbors for large language models ☆48 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆46 · Updated 8 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆32 · Updated 11 months ago
- ☆60 · Updated 7 months ago
- ☆43 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆30 · Updated 2 months ago
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- ☆107 · Updated last year
- Replicating O1 inference-time scaling laws ☆91 · Updated last year
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆92 · Updated 2 years ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆59 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- ☆80 · Updated 3 years ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated 2 years ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆41 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆144 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆135 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆93 · Updated last year
- ☆51 · Updated 2 years ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆20 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆80 · Updated 2 years ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆158 · Updated 6 months ago