Multiverse4FM / Multiverse
☆90 · Updated 7 months ago
Alternatives and similar repositories for Multiverse
Users interested in Multiverse are comparing it to the libraries listed below.
- Spectral Sphere Optimizer ☆94 · Updated 3 weeks ago
- [ICLR 2026] TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆419 · Updated last week
- Easy and Efficient dLLM Fine-Tuning ☆208 · Updated 2 weeks ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆61 · Updated 4 months ago
- [ICML 2025] Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness. ☆55 · Updated 9 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆131 · Updated 2 months ago
- Implementation of FP8/INT8 rollout for RL training with no performance drop. ☆289 · Updated 3 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆191 · Updated 7 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆89 · Updated 11 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆197 · Updated 2 months ago
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints. ☆510 · Updated 2 months ago
- MrlX: A Multi-Agent Reinforcement Learning Framework ☆189 · Updated 2 weeks ago
- Residual Context Diffusion (RCD): Repurposing discarded signals as structured priors for high-performance reasoning in dLLMs. ☆45 · Updated this week
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆218 · Updated 8 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆119 · Updated last year
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆253 · Updated 5 months ago
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" ☆110 · Updated 5 months ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆196 · Updated 2 weeks ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆188 · Updated 4 months ago
- [ICLR 2026] dParallel: Learnable Parallel Decoding for dLLMs ☆58 · Updated last week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆267 · Updated 7 months ago
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆468 · Updated 8 months ago