kyegomez / FlashAttention20
Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels.
☆113 · Updated 2 years ago
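For a rough sense of what "plug and play" attention in PyTorch looks like, here is a minimal sketch. It uses PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention` (PyTorch ≥ 2.0), which dispatches to a fused FlashAttention-style kernel on supported GPUs; it is not this repository's own API, which may differ.

```python
# Sketch only: not this repository's API. PyTorch >= 2.0 provides
# scaled_dot_product_attention, which selects a fused FlashAttention-style
# kernel on supported GPUs without any hand-written CUDA.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 2, 8, 1024, 64
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention in a single call.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```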
Alternatives and similar repositories for FlashAttention20
Users that are interested in FlashAttention20 are comparing it to the libraries listed below
- ☆157 · Updated 2 years ago
- Triton implementation of Flash Attention 2.0 ☆47 · Updated 2 years ago
- Low-bit optimizers for PyTorch ☆138 · Updated 2 years ago
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- ☆132 · Updated 8 months ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 · Updated 2 years ago
- Reorder-based post-training quantization for large language models ☆198 · Updated 2 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated last year
- ☆61 · Updated 2 years ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆339 · Updated 11 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime ☆184 · Updated 10 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆402 · Updated last year
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods ☆199 · Updated last year
- QuIP quantization ☆61 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆93 · Updated last year
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year
- ☆158 · Updated 11 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆141 · Updated 8 months ago
- ☆115 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆279 · Updated 3 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆110 · Updated 9 months ago
- ☆235 · Updated last year
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆137 · Updated last year
- ☆160 · Updated 2 years ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆113 · Updated last week
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆154 · Updated 3 weeks ago
- ☆163 · Updated 7 months ago
- Implementation of FlashAttention in PyTorch ☆180 · Updated last year