☆13 · Updated Jan 7, 2025
Alternatives and similar repositories for token-ring
Users that are interested in token-ring are comparing it to the libraries listed below
- Benchmark tests supporting the TiledCUDA library. (☆18, updated Nov 19, 2024)
- A Triton-only attention backend for vLLM (☆24, updated Feb 11, 2026)
- PyTorch implementation of the Flash Spectral Transform Unit. (☆21, updated Sep 19, 2024)
- ☆22, updated May 5, 2025
- ☆44, updated this week
- Whisper in TensorRT-LLM (☆17, updated Sep 21, 2023)
- ☆20, updated Dec 24, 2024
- Xmixers: A collection of SOTA efficient token/channel mixers (☆28, updated Sep 4, 2025)
- A study of CUTLASS (☆22, updated Nov 10, 2024)
- Sample codes using NVSHMEM on multi-GPU systems (☆30, updated Jan 22, 2023)
- Stateful LLM serving (☆96, updated Mar 11, 2025)
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. (☆70, updated Apr 14, 2025)
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. (☆93, updated Jan 16, 2026)
- ☆155, updated Mar 4, 2025
- ☆21, updated Mar 22, 2021
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆96, updated Feb 20, 2026)
- ☆88, updated May 31, 2025
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving (☆74, updated Sep 15, 2025)
- An experimental communicating attention kernel based on DeepEP. (☆35, updated Jul 29, 2025)
- A lightweight design for computation-communication overlap. (☆221, updated Jan 20, 2026)
- Tile-based language built for AI computation across all scales (☆138, updated this week)
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. (☆72, updated Sep 8, 2024)
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). (☆25, updated Feb 22, 2026)
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. (☆120, updated Mar 13, 2024)
- ☆34, updated Feb 3, 2025
- Optimize GEMM with Tensor Cores, step by step (☆36, updated Dec 17, 2023)
- ☆71, updated Mar 26, 2025
- ☆53, updated this week
- qwen-nsa (☆87, updated Oct 14, 2025)
- word2vec source code with detailed bilingual (Chinese/English) annotations (☆10, updated Oct 3, 2021)
- ☆27, updated Dec 3, 2025
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling (☆55, updated Jan 12, 2026)
- A TUI-based utility for real-time monitoring of InfiniBand traffic and performance metrics on the local node (☆62, updated Dec 19, 2025)
- KV cache compression for high-throughput LLM inference (☆154, updated Feb 5, 2025)
- A domain-specific language (DSL) based on Triton but providing higher-level abstractions. (☆41, updated Feb 4, 2026)
- ring-attention experiments (☆165, updated Oct 17, 2024)
- ☆97, updated Mar 26, 2025
- Triton documentation in Simplified Chinese / Triton 中文文档 (☆105, updated Dec 17, 2025)
- This project is based on the [LTX-Video](https://github.com/Lightricks/LTX-Video) algorithm in diffusers, optimized and accelerate… (☆13, updated Dec 31, 2024)