Efficient Long-context Language Model Training by Core Attention Disaggregation
☆96 · Updated Mar 5, 2026
Alternatives and similar repositories for DistCA
Users interested in DistCA are comparing it to the repositories listed below.
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching (☆60, updated Oct 27, 2025)
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit (☆93, updated Jan 26, 2026)
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank (☆75, updated Nov 4, 2024)
- ☆52, updated May 19, 2025
- ☆65, updated Apr 26, 2025
- An auxiliary project analyzing the characteristics of KV in DiT attention (☆33, updated Nov 29, 2024)
- An experimental communicating attention kernel based on DeepEP (☆35, updated Jul 29, 2025)
- Vortex: A Flexible and Efficient Sparse Attention Framework (☆49, updated Jan 21, 2026)
- Expert Specialization MoE solution based on CUTLASS (☆27, updated Jan 19, 2026)
- ☆234, updated Nov 19, 2025
- A fast text search engine built for SSDs, written in C++ (☆11, updated Aug 29, 2022)
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] (☆42, updated May 13, 2025)
- Triton-based Symmetric Memory operators and examples (☆94, updated Jan 15, 2026)
- ☆119, updated May 19, 2025
- d3LLM: Ultra-Fast Diffusion LLM 🚀 (☆110, updated Mar 15, 2026)
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆81, updated Oct 15, 2025)
- APEX+: an LLM serving simulator (☆44, updated Jun 16, 2025)
- ☆16, updated Feb 24, 2026
- A simple API for using CUPTI (☆10, updated Aug 19, 2025)
- ArcticInference: vLLM plugin for high-throughput, low-latency inference (☆413, updated Mar 3, 2026)
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] (☆67, updated Oct 2, 2025)
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer (☆170, updated Feb 11, 2026)
- ☆15, updated Sep 22, 2024
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) (☆19, updated May 28, 2024)
- Sequence-level 1F1B schedule for LLMs (☆38, updated Aug 26, 2025)
- Perplexity's open-source garden for inference technology (☆382, updated Dec 25, 2025)
- ☆152, updated Oct 9, 2024
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) (☆31, updated Jun 14, 2024)
- A lightweight design for computation-communication overlap (☆225, updated Jan 20, 2026)
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference (☆116, updated Mar 7, 2026)
- How to plot for papers, slides, demos, etc. (☆10, updated Apr 7, 2022)
- Multi-Turn RL Training System with AgentTrainer for Language Model Game Reinforcement Learning (☆60, updated Dec 18, 2025)
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference (☆687, updated Mar 8, 2026)
- Quartet II official code (☆61, updated this week)
- ☆17, updated Jan 29, 2026
- Compiler for Dynamic Neural Networks (☆45, updated Nov 13, 2023)
- [NeurIPS 2025] A simple vLLM extension to help you speed up reasoning models without training (☆224, updated May 31, 2025)
- From-scratch C implementation of the multi-head latent attention used in the DeepSeek-V3 technical report (☆18, updated Jan 15, 2025)
- Accelerating MoE with IO- and tile-aware optimizations (☆613, updated Mar 17, 2026)