Infini-AI-Lab / Multiverse
☆79 · Updated this week
Alternatives and similar repositories for Multiverse
Users interested in Multiverse are comparing it to the repositories listed below.
- ☆78 · Updated 5 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆63 · Updated 3 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆86 · Updated 10 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆112 · Updated 9 months ago
- ☆82 · Updated 6 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆88 · Updated last month
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆103 · Updated 3 weeks ago
- ☆52 · Updated 3 weeks ago
- ☆84 · Updated last week
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆95 · Updated last year
- AnchorAttention: Improved attention for LLM long-context training ☆212 · Updated 6 months ago
- ☆95 · Updated 3 months ago
- ☆112 · Updated last month
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆61 · Updated 2 weeks ago
- ☆50 · Updated last month
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆116 · Updated 3 months ago
- ☆51 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆108 · Updated last month
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆52 · Updated 8 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆85 · Updated 7 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆92 · Updated last week
- Code for "Reasoning to Learn from Latent Thoughts" ☆114 · Updated 4 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆159 · Updated last week
- ☆19 · Updated 7 months ago
- Official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing" ☆43 · Updated this week
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆158 · Updated this week
- ☆45 · Updated last month
- Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling ☆88 · Updated last month
- A repo for open research on building large reasoning models ☆84 · Updated this week
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆38 · Updated 5 months ago