xinhao-luo / ClusterFusion
[NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive
☆66 · Dec 11, 2025 · Updated 2 months ago
Alternatives and similar repositories for ClusterFusion
Users interested in ClusterFusion are comparing it to the repositories listed below.
- ☆18 · Mar 4, 2025 · Updated 11 months ago
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆57 · Feb 2, 2026 · Updated last week
- ☆36 · Dec 9, 2025 · Updated 2 months ago
- A simple API to use CUPTI ☆11 · Aug 19, 2025 · Updated 5 months ago
- Reproduction of the libsmctrl paper, with a Python-side interface added so that compute resources can be allocated flexibly from Python ☆12 · May 21, 2024 · Updated last year
- Boosting GPU utilization for LLM serving via dynamic spatial-temporal prefill & decode orchestration ☆33 · Jan 8, 2026 · Updated last month
- ☆12 · Jun 29, 2024 · Updated last year
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆34 · Feb 10, 2025 · Updated last year
- A lightweight design for computation-communication overlap ☆219 · Jan 20, 2026 · Updated 3 weeks ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] ☆41 · May 13, 2025 · Updated 9 months ago
- ☆221 · Nov 19, 2025 · Updated 2 months ago
- ☆15 · Jun 26, 2024 · Updated last year
- DeeperGEMM: crazy optimized version ☆73 · May 5, 2025 · Updated 9 months ago
- ☆18 · Apr 21, 2024 · Updated last year
- ☆41 · Oct 15, 2025 · Updated 3 months ago
- ☆131 · Nov 11, 2024 · Updated last year
- APEX+ is an LLM serving simulator ☆42 · Jun 16, 2025 · Updated 7 months ago
- Compiler for dynamic neural networks ☆45 · Nov 13, 2023 · Updated 2 years ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆191 · Feb 7, 2026 · Updated last week
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels, a memory profiler that understands more low-level allocatio… ☆86 · Sep 11, 2025 · Updated 5 months ago
- ☆145 · Dec 19, 2025 · Updated last month
- Tutorials about polyhedral compilation ☆62 · Updated this week
- High-performance KV cache store for LLMs ☆45 · Feb 7, 2026 · Updated last week
- Pipeline parallelism emulation and visualization ☆77 · Jan 8, 2026 · Updated last month
- Distributed compiler based on Triton for parallel systems ☆1,350 · Updated this week
- Pin-based tool for simulation of rack-scale disaggregated memory systems ☆32 · Mar 8, 2025 · Updated 11 months ago
- A low-latency & high-throughput serving engine for LLMs ☆470 · Jan 8, 2026 · Updated last month
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆30 · Jun 14, 2024 · Updated last year
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Jun 5, 2024 · Updated last year
- (WIP) Parallel inference for black-forest-labs' FLUX model ☆18 · Nov 18, 2024 · Updated last year
- Code & examples for "CUDA - From Correctness to Performance" ☆123 · Oct 24, 2024 · Updated last year
- [Archived] For the latest updates and community contributions, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Jan 16, 2026 · Updated 3 weeks ago
- ☆32 · Jul 17, 2024 · Updated last year
- An easy-to-understand TensorOp matmul tutorial ☆410 · Updated this week
- ☆54 · May 5, 2025 · Updated 9 months ago
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆114 · Jun 14, 2025 · Updated 8 months ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆49 · Jun 17, 2025 · Updated 7 months ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆53 · Jan 12, 2026 · Updated last month
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Oct 29, 2025 · Updated 3 months ago