Efficient Long-context Language Model Training by Core Attention Disaggregation
☆97 · Apr 7, 2026 · Updated last week
Alternatives and similar repositories for DistCA
Users interested in DistCA often compare it to the libraries listed below.
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆60 · Oct 27, 2025 · Updated 5 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆95 · Apr 6, 2026 · Updated last week
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆78 · Nov 4, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- ☆51 · May 19, 2025 · Updated 10 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆34 · Nov 29, 2024 · Updated last year
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 8 months ago
- ☆24 · May 9, 2025 · Updated 11 months ago
- Expert Specialization MoE Solution based on CUTLASS ☆26 · Jan 19, 2026 · Updated 2 months ago
- ☆239 · Nov 19, 2025 · Updated 4 months ago
- A fast text search engine built for SSDs, written in C++. ☆11 · Aug 29, 2022 · Updated 3 years ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆44 · May 13, 2025 · Updated 11 months ago
- Triton-based Symmetric Memory operators and examples ☆97 · Mar 28, 2026 · Updated 2 weeks ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆51 · Updated this week
- ☆119 · May 19, 2025 · Updated 10 months ago
- d3LLM: Ultra-Fast Diffusion LLM 🚀 ☆115 · Mar 19, 2026 · Updated 3 weeks ago
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆81 · Oct 15, 2025 · Updated 6 months ago
- APEX+ is an LLM serving simulator ☆44 · Jun 16, 2025 · Updated 9 months ago
- ☆17 · Updated this week
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆421 · Mar 28, 2026 · Updated 2 weeks ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆67 · Oct 2, 2025 · Updated 6 months ago
- ☆15 · Sep 22, 2024 · Updated last year
- NVSHMEM‑Tutorial: Build a DeepEP‑like GPU Buffer ☆174 · Feb 11, 2026 · Updated 2 months ago
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) ☆19 · May 28, 2024 · Updated last year
- Sequence-level 1F1B schedule for LLMs. ☆38 · Aug 26, 2025 · Updated 7 months ago
- Perplexity open source garden for inference technology ☆390 · Dec 25, 2025 · Updated 3 months ago
- ☆156 · Oct 9, 2024 · Updated last year
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆31 · Jun 14, 2024 · Updated last year
- A lightweight design for computation-communication overlap. ☆226 · Jan 20, 2026 · Updated 2 months ago
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆117 · Mar 7, 2026 · Updated last month
- How to plot for papers, slides, demos, etc. ☆10 · Apr 7, 2022 · Updated 4 years ago
- Multi-Turn RL Training System with AgentTrainer for Language Model Game Reinforcement Learning ☆61 · Dec 18, 2025 · Updated 3 months ago
- ☆19 · Jan 29, 2026 · Updated 2 months ago
- Compiler for Dynamic Neural Networks ☆45 · Nov 13, 2023 · Updated 2 years ago
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆704 · Mar 8, 2026 · Updated last month
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆227 · May 31, 2025 · Updated 10 months ago
- A from-scratch C implementation of the multi-head latent attention described in the DeepSeek-V3 technical paper. ☆18 · Jan 15, 2025 · Updated last year
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆257 · Feb 13, 2026 · Updated 2 months ago