jxiw / M1
M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models
⭐23 · Updated this week
Alternatives and similar repositories for M1
Users who are interested in M1 are comparing it to the libraries listed below
- 🔥 A minimal training framework for scaling FLA models ⭐170 · Updated last week
- ⭐104 · Updated 2 weeks ago
- ⭐85 · Updated 2 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ⭐166 · Updated this week
- ⭐51 · Updated 3 months ago
- Efficient Triton implementation of Native Sparse Attention. ⭐168 · Updated last month
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ⭐88 · Updated last month
- ⭐82 · Updated last month
- ⭐114 · Updated 3 weeks ago
- The official implementation of the paper <MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression> ⭐134 · Updated 3 weeks ago
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ⭐113 · Updated last month
- Code for "Reasoning to Learn from Latent Thoughts" ⭐104 · Updated 2 months ago
- ⭐58 · Updated this week
- [ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ⭐81 · Updated 6 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ⭐92 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ⭐221 · Updated last month
- ⭐96 · Updated 8 months ago
- Stick-breaking attention ⭐57 · Updated last week
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ⭐175 · Updated this week
- ⭐76 · Updated 3 months ago
- ⭐80 · Updated 5 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ⭐233 · Updated 2 weeks ago
- The evaluation framework for training-free sparse attention in LLMs ⭐69 · Updated this week
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ⭐108 · Updated 9 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ⭐85 · Updated last year
- Repo for "Z1: Efficient Test-time Scaling with Code" ⭐61 · Updated 2 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [arXiv '25] ⭐39 · Updated last month
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ⭐71 · Updated 3 weeks ago
- [NeurIPS-2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ⭐85 · Updated 8 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ⭐189 · Updated 3 months ago