Multiverse4FM / Multiverse
☆78 · Updated 4 months ago
Alternatives and similar repositories for Multiverse
Users interested in Multiverse are comparing it to the libraries listed below.
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆115 · Updated 11 months ago
- The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆167 · Updated 3 months ago
- [ICML 2025] Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness. ☆47 · Updated 5 months ago
- SIFT: Grounding LLM Reasoning in Contexts via Stickers ☆58 · Updated 7 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆53 · Updated 2 weeks ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 8 months ago
- [EMNLP 2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆64 · Updated 6 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆164 · Updated last month
- [NeurIPS '25] dKV-Cache: The Cache for Diffusion Language Models ☆106 · Updated 4 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆553 · Updated last week
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆253 · Updated 2 weeks ago
- [NeurIPS 2025] A simple extension on vLLM to help you speed up reasoning models without training. ☆197 · Updated 4 months ago
- Implementation of the Negative-aware Finetuning (NFT) algorithm for "Bridging Supervised Learning and Reinforcement Learning in Math Reasonin… ☆43 · Updated last month
- [CoLM '25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆146 · Updated 3 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆236 · Updated 2 months ago
- [ICLR '24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆95 · Updated 3 months ago
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" ☆91 · Updated 2 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆191 · Updated 2 weeks ago
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models. ☆121 · Updated 3 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆177 · Updated 2 months ago
- AnchorAttention: Improved attention for LLM long-context training ☆213 · Updated 9 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆156 · Updated 3 weeks ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆220 · Updated last month