MoBA: Mixture of Block Attention for Long-Context LLMs
★2,100 · Apr 3, 2025 · Updated last year
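For context on what the libraries below are being compared against: MoBA splits the key/value sequence into fixed-size blocks, scores each block with a mean-pooled key representation, and lets every query attend only within its top-k blocks. The sketch below is a minimal, single-head illustration of that idea only; the function name `moba_attention` and its parameters are assumptions for illustration, not this repo's API, and the causal masking used in the paper is omitted for brevity.

```python
# Minimal single-head sketch of block attention with top-k gating
# (illustrative only; not the official MoBA kernel, and the paper's
# causal mask is omitted for brevity).
import torch
import torch.nn.functional as F

def moba_attention(q, k, v, block_size=64, top_k=3):
    # q, k, v: (seq_len, dim); assumes seq_len is divisible by block_size
    seq_len, dim = q.shape
    n_blocks = seq_len // block_size
    # Mean-pool each key block into a single gating representation.
    k_blocks = k.view(n_blocks, block_size, dim).mean(dim=1)       # (n_blocks, dim)
    gate = q @ k_blocks.T                                          # (seq_len, n_blocks)
    # Each query keeps only its top-k scoring blocks.
    topk = gate.topk(min(top_k, n_blocks), dim=-1).indices
    block_mask = torch.zeros(seq_len, n_blocks, dtype=torch.bool)
    block_mask.scatter_(1, topk, True)
    # Expand the block-level mask to token level and attend sparsely.
    token_mask = block_mask.repeat_interleave(block_size, dim=1)   # (seq_len, seq_len)
    scores = (q @ k.T) / dim**0.5
    scores = scores.masked_fill(~token_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(256, 64) for _ in range(3))
out = moba_attention(q, k, v)  # (256, 64)
```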
Alternatives and similar repositories for MoBA
Users interested in MoBA are comparing it to the libraries listed below.
- Muon is Scalable for LLM Training ★1,459 · Aug 3, 2025 · Updated 8 months ago
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ★989 · Feb 5, 2026 · Updated 2 months ago
- Efficient implementations for emerging model architectures ★4,999 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention computation… ★1,207 · Apr 8, 2026 · Updated 3 weeks ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ★799 · Aug 15, 2025 · Updated 8 months ago
- FlashMLA: Efficient Multi-head Latent Attention Kernels ★12,564 · Apr 7, 2026 · Updated 3 weeks ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention ★3,417 · Jul 7, 2025 · Updated 9 months ago
- A sparse attention kernel supporting mixed sparse patterns ★503 · Jan 18, 2026 · Updated 3 months ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ★5,186 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ★5,498 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ★540 · Feb 10, 2025 · Updated last year
- verl/HybridFlow: A Flexible and Efficient RL Post-Training Framework ★20,930 · Updated this week
- DeepEP: an efficient expert-parallel communication library ★9,199 · Updated this week
- Official Repo for Open-Reasoner-Zero ★2,091 · Jun 2, 2025 · Updated 10 months ago
- ★815 · Jun 9, 2025 · Updated 10 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ★1,180 · Jul 15, 2025 · Updated 9 months ago
- Distributed Compiler based on Triton for Parallel Systems ★1,414 · Apr 22, 2026 · Updated last week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ★2,943 · Jan 14, 2026 · Updated 3 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ★381 · Jul 10, 2025 · Updated 9 months ago
- Fast and memory-efficient exact attention ★23,563 · Updated this week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ★6,949 · Apr 22, 2026 · Updated last week
- SGLang is a high-performance serving framework for large language models and multimodal models. ★26,397 · Updated this week
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ★7,979 · May 15, 2025 · Updated 11 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ★1,295 · Aug 28, 2025 · Updated 8 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ★9,417 · Updated this week
- ★3,473 · Mar 7, 2025 · Updated last year
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates inference for any model. ★985 · Feb 25, 2026 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ★1,179 · Apr 13, 2026 · Updated 2 weeks ago
- Ongoing research training transformer models at scale ★16,145 · Updated this week
- Ring attention implementation with flash attention ★1,014 · Sep 10, 2025 · Updated 7 months ago
- Expert Parallelism Load Balancer ★1,363 · Mar 24, 2025 · Updated last year
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ★5,632 · Apr 23, 2026 · Updated last week
- Simple RL training for reasoning ★3,849 · Dec 23, 2025 · Updated 4 months ago
- Democratizing Reinforcement Learning for LLMs ★5,447 · Apr 23, 2026 · Updated last week
- Fully open reproduction of DeepSeek-R1 ★26,004 · Apr 2, 2026 · Updated 3 weeks ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ★250 · Sep 12, 2025 · Updated 7 months ago
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ★2,603 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ★834 · Mar 6, 2025 · Updated last year
- Analyze computation-communication overlap in V3/R1. ★1,152 · Mar 21, 2025 · Updated last year