flagos-ai / FlagCX
FlagCX is a scalable and adaptive cross-chip communication library.
☆138 · Updated this week
Alternatives and similar repositories for FlagCX
Users interested in FlagCX are comparing it to the libraries listed below.
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance⚡️ (see the WMMA sketch after this list). ☆141 · Updated 7 months ago
- A lightweight design for computation-communication overlap. ☆207 · Updated last week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆287 · Updated 4 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆89 · Updated last week
- ☆152 · Updated 11 months ago
- ☆112 · Updated 7 months ago
- High-performance Transformer implementation in C++. ☆147 · Updated 11 months ago
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆242 · Updated last month
- Fast and memory-efficient exact attention ☆106 · Updated 3 weeks ago
- ☆104 · Updated last year
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆97 · Updated last week
- ☆33 · Updated 11 months ago
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆151 · Updated 3 months ago
- ☆96 · Updated 9 months ago
- Venus Collective Communication Library, supported by SII and Infrawaves. ☆129 · Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆120 · Updated last year
- ☆130 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆175 · Updated 9 months ago
- ☆337 · Updated this week
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated 10 months ago
- ☆77 · Updated last year
- ☆60 · Updated last year
- ☆153 · Updated 10 months ago
- FP8 flash attention implemented with the cutlass library on the Ada architecture. ☆78 · Updated last year
- A llama model inference framework implemented in CUDA C++. ☆63 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆191 · Updated 11 months ago
- ☆158 · Updated last month
- ☆141 · Updated last year
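
For the HGEMM-from-scratch entry above, here is a minimal sketch of the WMMA half of that technique, using only CUDA's public `nvcuda::wmma` API. The kernel name, the one-warp-per-tile launch scheme, and the assumption that M, N, K are multiples of 16 are illustrative choices of this sketch, not taken from that repository.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes one 16x16 tile of C = A * B.
// A is row-major MxK, B is column-major KxN, C is row-major MxN;
// M, N, K are assumed multiples of 16 (requires sm_70+).
// Launch: <<<dim3(N / 16, M / 16), 32>>> -- one warp per output tile.
__global__ void wmma_hgemm_tile(const half* A, const half* B, float* C,
                                int M, int N, int K) {
    int tile_m = blockIdx.y * 16;  // first row of this warp's C tile
    int tile_n = blockIdx.x * 16;  // first column of this warp's C tile
    if (tile_m >= M || tile_n >= N) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);

    // March along K in steps of 16, accumulating into the C fragment.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + tile_m * K + k, K);
        wmma::load_matrix_sync(b_frag, B + tile_n * K + k, K);
        wmma::mma_sync(acc, a_frag, b_frag, acc);
    }
    wmma::store_matrix_sync(C + tile_m * N + tile_n, acc, N,
                            wmma::mem_row_major);
}
```

A peak-performance HGEMM layers shared-memory staging, double buffering, and swizzled layouts on top of this skeleton; those are the steps the listed repository walks through.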
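The roofline-comparison entry rests on one formula: attainable throughput = min(peak compute, peak bandwidth × arithmetic intensity). A host-side sketch of that arithmetic follows; all machine numbers and the layer shape are assumed placeholders, not figures from that repository.

```cuda
#include <cstdio>
#include <algorithm>

// Roofline: attainable = min(peak_flops, peak_bw * arithmetic_intensity).
int main() {
    const double peak_flops = 312e12;  // hypothetical FP16 tensor peak, FLOP/s
    const double peak_bw    = 2.0e12;  // hypothetical HBM bandwidth, B/s

    // Decode-phase GEMV-like layer: FP16 weights [d, d], batch size 1.
    const double d     = 8192.0;
    const double flops = 2.0 * d * d;    // one multiply-add per weight
    const double bytes = 2.0 * d * d;    // 2 bytes per FP16 weight read
    const double ai    = flops / bytes;  // ~1 FLOP/byte -> memory bound

    const double attainable = std::min(peak_flops, peak_bw * ai);
    std::printf("AI = %.2f FLOP/B, attainable = %.1f TFLOP/s\n",
                ai, attainable / 1e12);
    return 0;
}
```

With these placeholder numbers the layer sits far left of the machine's ridge point, so decode throughput is bandwidth-bound, which is the kind of conclusion such a roofline comparison is meant to surface.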