exists-forall / striped_attention (☆36, updated last year)
Related projects
Alternatives and complementary repositories for striped_attention
- Triton-based implementation of Sparse Mixture of Experts (☆185, updated last month; see the MoE routing sketch after this list)
- Odysseus: Playground of LLM Sequence Parallelism (☆57, updated 5 months ago)
- PyTorch bindings for CUTLASS grouped GEMM (☆53, updated 3 weeks ago; see the grouped-GEMM sketch after this list)
- Boosting 4-bit inference kernels with 2:4 sparsity (☆51, updated 2 months ago)
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers (☆195, updated 3 months ago)
- PyTorch bindings for CUTLASS grouped GEMM (☆68, updated 4 months ago)
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference (☆61, updated last month)
- Extensible collectives library in Triton (☆72, updated last month)
- Memory Optimizations for Deep Learning (ICML 2023) (☆60, updated 8 months ago)
- Sparsity support for PyTorch (☆31, updated this week)
- Code for Palu: Compressing KV-Cache with Low-Rank Projection (☆57, updated this week)
- The source code of "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" (☆56, updated last month)
- Ring-attention experiments (☆97, updated last month; see the striped-partitioning sketch after this list)
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs (☆147, updated 4 months ago)
- Experiments using Tangent to autodiff Triton (☆72, updated 9 months ago)
- High-speed GEMV kernels, with up to 2.7× speedup over the PyTorch baseline (☆90, updated 4 months ago)
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters (☆32, updated 3 months ago)
- 16-fold memory access reduction with nearly no loss (☆59, updated last week)
- GPTQ inference TVM kernel (☆36, updated 6 months ago)
- FlexAttention with FlashAttention-3 support (☆27, updated last month)
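
For orientation on what striped_attention (and the ring-attention experiments above) are about, here is a minimal sketch, not the repository's actual code, contrasting the contiguous block partitioning used by Ring Attention with the round-robin "striped" partitioning that Striped Attention uses to balance the causal workload; all sizes and names are illustrative.

```python
# Sketch: contiguous (Ring Attention) vs. striped (Striped Attention)
# partitioning of a token sequence across devices under a causal mask.
import torch

seq_len, n_devices = 16, 4

# Ring Attention: device d owns a contiguous block of tokens.
contiguous = torch.arange(seq_len).reshape(n_devices, seq_len // n_devices)

# Striped Attention: device d owns every n_devices-th token (round-robin),
# so each device holds a mix of early and late positions.
striped = torch.arange(seq_len).reshape(seq_len // n_devices, n_devices).T

def causal_work(owned):
    # Under a causal mask, the query at position p attends to p + 1 keys;
    # summing over a device's tokens approximates its share of the work.
    return int((owned + 1).sum())

for name, parts in [("contiguous", contiguous), ("striped", striped)]:
    print(name, [causal_work(p) for p in parts])
# contiguous: [10, 26, 42, 58] -- the device holding the last block dominates.
# striped:    [28, 32, 36, 40] -- work is nearly equal across devices.
```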
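The two CUTLASS grouped-GEMM entries expose a single kernel launch that runs many independent GEMMs whose shapes may differ per group, a pattern MoE layers rely on because each expert sees a different number of tokens. Below is a pure-PyTorch reference for those semantics with illustrative shapes; the bindings' actual API may differ.

```python
# Reference semantics of a grouped GEMM: a batch of independent matmuls
# with per-group shapes, emulated here with a Python loop (the CUTLASS
# bindings fuse this into one kernel launch).
import torch

def grouped_gemm_reference(a_list, b_list):
    # Each (A_i, B_i) pair may have different M_i, K_i, N_i.
    return [a @ b for a, b in zip(a_list, b_list)]

a_list = [torch.randn(m, k) for m, k in [(128, 64), (32, 64), (257, 64)]]
b_list = [torch.randn(64, n) for n in (96, 96, 96)]
outs = grouped_gemm_reference(a_list, b_list)
print([tuple(o.shape) for o in outs])  # [(128, 96), (32, 96), (257, 96)]
```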
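And for the Triton sparse-MoE entry, a minimal sketch of the top-k routing and dispatch that such kernels accelerate; the module names, shapes, and the naive per-expert loop are illustrative, not the repository's API.

```python
# Sketch: top-k token routing in a sparse Mixture-of-Experts layer.
import torch
import torch.nn.functional as F

tokens, d_model, n_experts, top_k = 8, 16, 4, 2
x = torch.randn(tokens, d_model)
router = torch.nn.Linear(d_model, n_experts)
experts = torch.nn.ModuleList(
    torch.nn.Linear(d_model, d_model) for _ in range(n_experts)
)

# Route: each token picks its top-k experts; gate weights are the
# renormalized softmax over the selected scores.
scores = router(x)                            # (tokens, n_experts)
weights, idx = scores.topk(top_k, dim=-1)     # (tokens, top_k)
weights = F.softmax(weights, dim=-1)

# Dispatch/combine: naive gather per expert (a fused Triton kernel would
# replace this loop and the scattered memory traffic it implies).
out = torch.zeros_like(x)
for e, expert in enumerate(experts):
    token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to e
    if token_ids.numel():
        out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
print(out.shape)  # torch.Size([8, 16])
```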