sail-sg / scaling-with-vocab
[NeurIPS-2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
★ 86 · Updated last year
Alternatives and similar repositories for scaling-with-vocab
Users interested in scaling-with-vocab are comparing it to the libraries listed below
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" (★ 101, updated 2 months ago)
- The official implementation for [NeurIPS 2025 Oral] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink… (★ 81, updated 2 weeks ago)
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] (★ 141, updated last year)
- (★ 96, updated 3 weeks ago)
- Long Context Extension and Generalization in LLMs (★ 60, updated last year)
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization (★ 40, updated 7 months ago)
- Exploration of automated dataset selection approaches at large scales. (★ 47, updated 7 months ago)
- Code for the paper "Patch-Level Training for Large Language Models" (★ 88, updated 10 months ago)
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" (★ 83, updated last year)
- Large Language Models Can Self-Improve in Long-context Reasoning (★ 73, updated 10 months ago)
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models (★ 78, updated last year)
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches (★ 55, updated 7 months ago)
- Code for "Reasoning to Learn from Latent Thoughts" (★ 119, updated 6 months ago)
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… (★ 79, updated last week)
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models (★ 55, updated 7 months ago)
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study (★ 54, updated 10 months ago)
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) (★ 38, updated last year)
- The code and data for the paper JiuZhang3.0 (★ 49, updated last year)
- (★ 73, updated 6 months ago)
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models (★ 61, updated 10 months ago)
- (★ 53, updated 7 months ago)
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning (★ 111, updated 5 months ago)
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling (★ 176, updated 2 months ago)
- Official PyTorch implementation of EMoE: Unlocking Emergent Modularity in Large Language Models [main conference @ NAACL 2024] (★ 35, updated last year)
- RL Scaling and Test-Time Scaling (ICML'25) (★ 111, updated 8 months ago)
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression (★ 104, updated 5 months ago)
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] (★ 88, updated 5 months ago)
- (★ 65, updated last year)
- (★ 30, updated 2 years ago)
- A Sober Look at Language Model Reasoning (★ 83, updated 3 weeks ago)