Accelerate LLM preference tuning via prefix sharing with a single line of code
☆52 · Jul 4, 2025 · Updated 9 months ago
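The prefix-sharing idea can be illustrated with a toy attention mask. In preference tuning (e.g. DPO), the chosen and rejected completions share the same prompt, so instead of running the prompt forward pass twice, the pair can be packed as [prompt | chosen | rejected] with a mask that lets each completion attend to the shared prefix but never to the other completion. The sketch below is illustrative only; the function name and packing layout are assumptions, not flash-preference's actual API.

```python
import numpy as np

def prefix_sharing_mask(prompt_len: int, chosen_len: int, rejected_len: int) -> np.ndarray:
    """Boolean attention mask (True = may attend) for one packed
    preference pair laid out as [prompt | chosen | rejected].
    The prompt prefix is computed once; chosen and rejected each
    attend causally to it but never to each other.
    (Illustrative sketch, not flash-preference's actual API.)"""
    total = prompt_len + chosen_len + rejected_len
    mask = np.tril(np.ones((total, total), dtype=bool))  # plain causal mask
    # Block the rejected completion from attending to the chosen one.
    c0, r0 = prompt_len, prompt_len + chosen_len
    mask[r0:, c0:r0] = False
    return mask

m = prefix_sharing_mask(prompt_len=4, chosen_len=3, rejected_len=2)
assert m[7, :4].all()       # rejected tokens see the shared prompt...
assert not m[7, 4:7].any()  # ...but not the chosen completion
```

In a real implementation the rejected segment would also need position ids that restart after the prompt, but the saving is the same: the prompt's activations and KV entries are computed once per pair instead of once per completion.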
Alternatives and similar repositories for flash-preference
Users interested in flash-preference are comparing it to the libraries listed below.
- ☆66 · Apr 26, 2025 · Updated last year
- ☆52 · May 19, 2025 · Updated 11 months ago
- DeeperGEMM: crazy optimized version ☆86 · May 5, 2025 · Updated 11 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆97 · Jan 16, 2026 · Updated 3 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆278 · Feb 2, 2026 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆227 · Jan 20, 2026 · Updated 3 months ago
- High-performance LLM operator library built on TileLang. ☆111 · Updated this week
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆248 · Jun 15, 2025 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆21 · Apr 10, 2026 · Updated 2 weeks ago
- ☆44 · Oct 15, 2025 · Updated 6 months ago
- An experimental communicating attention kernel based on DeepEP. ☆34 · Jul 29, 2025 · Updated 9 months ago
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- Debug print operator for CUDA graph debugging ☆15 · Aug 2, 2024 · Updated last year
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Jan 11, 2025 · Updated last year
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆98 · Apr 23, 2026 · Updated last week
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- ☆362 · Jan 28, 2026 · Updated 3 months ago
- Implement Flash Attention using CuTe. ☆106 · Dec 17, 2024 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on TileLang ☆44 · Nov 19, 2025 · Updated 5 months ago
- Perplexity GPU Kernels ☆569 · Nov 7, 2025 · Updated 5 months ago
- Patches for Hugging Face Transformers to save memory ☆36 · Jun 2, 2025 · Updated 10 months ago
- ☆98 · May 31, 2025 · Updated 11 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 7 months ago
- A simple API for using CUPTI ☆10 · Aug 19, 2025 · Updated 8 months ago
- [Archived] For the latest updates and community contributions, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆15 · Jan 16, 2026 · Updated 3 months ago
- Reproduction of the libsmctrl paper, with a Python-side interface added so compute resources can be allocated flexibly from Python ☆12 · May 21, 2024 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆180 · Feb 11, 2026 · Updated 2 months ago
- DeepSeek-V3/R1 inference performance simulator ☆195 · Mar 27, 2025 · Updated last year
- NVIDIA cuTile learning ☆167 · Dec 9, 2025 · Updated 4 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆795 · Apr 21, 2026 · Updated last week
- ☆186 · May 7, 2025 · Updated 11 months ago
- ☆32 · Jul 2, 2025 · Updated 9 months ago
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆75 · May 9, 2025 · Updated 11 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance⚡️ ☆151 · May 10, 2025 · Updated 11 months ago
- ☆80 · Updated this week
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆56 · Apr 15, 2026 · Updated 2 weeks ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆186 · Mar 17, 2026 · Updated last month
- Unofficial description of the CUDA assembly (SASS) instruction sets. ☆211 · Jul 18, 2025 · Updated 9 months ago