Accelerate LLM preference tuning via prefix sharing with a single line of code
☆51 · Jul 4, 2025 · Updated 9 months ago
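The tagline above refers to a known optimization for DPO-style preference tuning: the chosen and rejected responses in each preference pair share the same prompt, so with prefix sharing the prompt only needs to be processed once per pair instead of once per response. A minimal sketch of the token-count saving — the function names here are illustrative, not the flash-preference API:

```python
# Hedged sketch of the prefix-sharing saving in preference tuning.
# Not the flash-preference API; just the arithmetic behind the idea.

def naive_tokens(prompt_len: int, chosen_len: int, rejected_len: int) -> int:
    """Tokens processed without sharing: the prompt is duplicated
    into both (prompt + chosen) and (prompt + rejected) sequences."""
    return (prompt_len + chosen_len) + (prompt_len + rejected_len)

def shared_tokens(prompt_len: int, chosen_len: int, rejected_len: int) -> int:
    """Tokens processed with prefix sharing: the prompt appears once,
    and both responses attend to it."""
    return prompt_len + chosen_len + rejected_len

# Example: a 1000-token prompt with ~200-token responses.
saved = naive_tokens(1000, 200, 180) - shared_tokens(1000, 200, 180)
print(saved)  # exactly the duplicated prompt: 1000 tokens per pair
```

The saving grows with the prompt-to-response length ratio, which is why the technique pays off most on long-context preference data.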
Alternatives and similar repositories for flash-preference
Users interested in flash-preference are comparing it to the libraries listed below.
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- DeeperGEMM: crazy optimized version ☆86 · May 5, 2025 · Updated 11 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Mar 25, 2026 · Updated 2 weeks ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆95 · Jan 16, 2026 · Updated 2 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆275 · Feb 2, 2026 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆225 · Jan 20, 2026 · Updated 2 months ago
- High-performance LLM operator library built on TileLang. ☆98 · Updated this week
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆248 · Jun 15, 2025 · Updated 9 months ago
- Fast and memory-efficient exact attention ☆20 · Mar 13, 2026 · Updated 3 weeks ago
- ☆44 · Oct 15, 2025 · Updated 5 months ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 8 months ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- Debug print operator for CUDA graph debugging ☆14 · Aug 2, 2024 · Updated last year
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆95 · Mar 31, 2026 · Updated last week
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Jan 11, 2025 · Updated last year
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- ☆359 · Jan 28, 2026 · Updated 2 months ago
- Implement Flash Attention using CuTe. ☆105 · Dec 17, 2024 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on TileLang ☆44 · Nov 19, 2025 · Updated 4 months ago
- Perplexity GPU Kernels ☆564 · Nov 7, 2025 · Updated 5 months ago
- Patches for Hugging Face Transformers to save memory ☆36 · Jun 2, 2025 · Updated 10 months ago
- ☆96 · May 31, 2025 · Updated 10 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 7 months ago
- A simple API for using CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- [Archived] For the latest updates and community contributions, please visit https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Jan 16, 2026 · Updated 2 months ago
- A reproduction of the libsmctrl paper, with an added Python-side interface for flexibly allocating compute resources from Python ☆12 · May 21, 2024 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆174 · Feb 11, 2026 · Updated last month
- DeepSeek-V3/R1 inference performance simulator ☆193 · Mar 27, 2025 · Updated last year
- NVIDIA cuTile learn ☆166 · Dec 9, 2025 · Updated 4 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆765 · Updated this week
- ☆186 · May 7, 2025 · Updated 11 months ago
- ☆32 · Jul 2, 2025 · Updated 9 months ago
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆74 · May 9, 2025 · Updated 11 months ago
- ☆62 · Apr 3, 2026 · Updated last week
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆150 · May 10, 2025 · Updated 11 months ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆56 · Mar 31, 2026 · Updated last week
- Look Back to Reason Forward: Revisitable Memory for Long-Context LLM Agents ☆31 · Mar 9, 2026 · Updated last month
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models with Megatron Core. ☆182 · Mar 17, 2026 · Updated 3 weeks ago