sgl-project / sgl-flash-attn
Fast and memory-efficient exact attention
☆18 · Jan 23, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for sgl-flash-attn
Users interested in sgl-flash-attn are comparing it to the libraries listed below.
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Jul 4, 2025 · Updated 7 months ago
- ☆38 · Aug 7, 2025 · Updated 6 months ago
- DeeperGEMM: crazy optimized version · ☆73 · May 5, 2025 · Updated 9 months ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] · ☆46 · Jan 27, 2026 · Updated 2 weeks ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆42 · Dec 29, 2025 · Updated last month
- 3x faster inference; unofficial implementation of EAGLE speculative decoding · ☆83 · Jul 3, 2025 · Updated 7 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference · ☆160 · Oct 13, 2025 · Updated 4 months ago
- Fast and memory-efficient exact kmeans · ☆138 · Updated this week
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system · ☆122 · Jan 1, 2026 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆129 · Jun 24, 2025 · Updated 7 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… · ☆266 · Updated this week
- GenPiCam - a Raspberry Pi based camera that reimagines the world with GenAI · ☆10 · Jun 28, 2023 · Updated 2 years ago
- ☆14 · Jan 23, 2026 · Updated 3 weeks ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… · ☆23 · Oct 1, 2025 · Updated 4 months ago
- ☆15 · Jan 27, 2026 · Updated 2 weeks ago
- Official PyTorch implementation of The Linear Attention Resurrection in Vision Transformer · ☆15 · Sep 7, 2024 · Updated last year
- VexFS is a Linux kernel-native file system with built-in vector search and semantic memory. Designed for AI agents, RAG, and LLM workload… · ☆24 · Oct 19, 2025 · Updated 3 months ago
- llama2 inference engine in Rust · ☆13 · Apr 12, 2024 · Updated last year
- ☆20 · Sep 11, 2025 · Updated 5 months ago
- A deep learning intermediate representation for multi-platform compilation optimization · ☆10 · Oct 28, 2024 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention · ☆52 · Aug 6, 2025 · Updated 6 months ago
- Packer templates to install Windows 10 Evaluation using the qemu/kvm builder · ☆12 · Sep 2, 2021 · Updated 4 years ago
- ☆29 · Nov 18, 2025 · Updated 2 months ago
- Blogs that I'm actively following · ☆13 · Sep 17, 2023 · Updated 2 years ago
- Empowering LLM Agents for Real-World Computer System Optimization · ☆16 · Sep 10, 2025 · Updated 5 months ago
- DatasetResearch: Benchmarking Agent Systems for Demand-Driven Dataset Discovery · ☆20 · Sep 24, 2025 · Updated 4 months ago
- A standalone CXL-enabled system simulator · ☆18 · Jan 10, 2026 · Updated last month
- Boosting GPU utilization for LLM serving via dynamic spatial-temporal prefill & decode orchestration · ☆33 · Jan 8, 2026 · Updated last month
- [ACL 2025] Official implementation of the "CoT-ICL Lab" framework · ☆11 · Oct 10, 2025 · Updated 4 months ago
- DP/TP/PP implemented with torch.distributed · ☆12 · Dec 28, 2023 · Updated 2 years ago
- PyTorch implementation of our ICML 2024 paper CaM: Cache Merging for Memory-efficient LLMs Inference · ☆49 · Jun 19, 2024 · Updated last year
- T22_034_han_shi_hao_CRDDC_2022_SourceCode · ☆11 · Dec 29, 2023 · Updated 2 years ago
- Code for DOMI · ☆11 · Mar 24, 2023 · Updated 2 years ago
- ☆11 · Dec 11, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆11 · Dec 13, 2023 · Updated 2 years ago
- A std::execution-style runtime context and high-performance RPC transport built on OpenUCX, including CUDA/ROCm/... devices with RDMA · ☆29 · Updated this week
- ☆13 · Jul 23, 2025 · Updated 6 months ago
- /j f t/ - YAML file tool · ☆13 · Apr 5, 2025 · Updated 10 months ago
- Blog for dataclouds@thoughtworks · ☆10 · Jun 19, 2016 · Updated 9 years ago