sail-sg / scaling-with-vocab
[NeurIPS 2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
⭐ 88 · Updated last year
Alternatives and similar repositories for scaling-with-vocab
Users interested in scaling-with-vocab are comparing it to the libraries listed below.
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ⭐ 102 · Updated 2 weeks ago
- Long Context Extension and Generalization in LLMs ⭐ 62 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ⭐ 47 · Updated 7 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ⭐ 89 · Updated 11 months ago
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ⭐ 83 · Updated last year
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ⭐ 56 · Updated 7 months ago
- AnchorAttention: improved attention for long-context LLM training ⭐ 213 · Updated 9 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ⭐ 142 · Updated last year
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ⭐ 177 · Updated 3 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ⭐ 73 · Updated 11 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ⭐ 78 · Updated last year
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ⭐ 112 · Updated 6 months ago
- The official implementation for the NeurIPS 2025 Oral paper "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink…" ⭐ 95 · Updated last month
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ⭐ 41 · Updated 7 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ⭐ 119 · Updated last month
- ⭐ 98 · Updated last month
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ⭐ 61 · Updated 11 months ago
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ⭐ 38 · Updated last year
- ⭐ 85 · Updated 9 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ⭐ 48 · Updated last year
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ⭐ 77 · Updated 3 weeks ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ⭐ 55 · Updated 8 months ago
- ⭐ 65 · Updated last year
- ⭐ 19 · Updated 9 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ⭐ 145 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ⭐ 114 · Updated 5 months ago
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" ⭐ 39 · Updated last year
- A Sober Look at Language Model Reasoning ⭐ 85 · Updated 2 weeks ago
- Code for "Reasoning to Learn from Latent Thoughts" ⭐ 121 · Updated 6 months ago
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ⭐ 55 · Updated 11 months ago