ryoungj / ObsScaling
[NeurIPS'24 Spotlight] Observational Scaling Laws
☆44 · Updated last month
Related projects
Alternatives and complementary repositories for ObsScaling
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆127 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆97 · Updated 2 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆118 · Updated 3 weeks ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" ☆91 · Updated 4 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆69 · Updated last month
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆160 · Updated 3 months ago
- A brief and partial summary of RLHF algorithms ☆64 · Updated this week
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆65 · Updated last year
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆46 · Updated 5 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆75 · Updated last month
- Self-Alignment with Principle-Following Reward Models ☆148 · Updated 8 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆84 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆63 · Updated last month
- AI Logging for Interpretability and Explainability 🔬 ☆89 · Updated 5 months ago
- Directional Preference Alignment ☆50 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆195 · Updated 2 weeks ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆107 · Updated 4 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆123 · Updated 8 months ago
- [NeurIPS'24] Official code for 🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving ☆77 · Updated last month
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆80 · Updated last week
- Official repository for the Omni-MATH benchmark ☆49 · Updated 2 weeks ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆97 · Updated 7 months ago