fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆190 · Updated last year
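For orientation, below is a minimal sketch of the grouped-query attention idea in plain PyTorch: several query heads share each key/value head, so the KV cache shrinks while the number of query heads stays the same. The function name, tensor shapes, and head counts are illustrative assumptions, not the API of this repository.

```python
import torch

def grouped_query_attention(q, k, v):
    """q: (batch, num_heads, seq, head_dim); k, v: (batch, num_kv_heads, seq, head_dim)."""
    batch, num_heads, seq_len, head_dim = q.shape
    num_kv_heads = k.shape[1]
    group_size = num_heads // num_kv_heads  # query heads per shared KV head
    # Repeat each key/value head so every group of query heads attends to the same KV head.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
    attn = scores.softmax(dim=-1)
    return attn @ v

# Example: 8 query heads grouped over 2 KV heads (hypothetical sizes).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```

With num_kv_heads = 1 this reduces to multi-query attention, and with num_kv_heads = num_heads it recovers standard multi-head attention, which is the spectrum the GQA paper interpolates over.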
Alternatives and similar repositories for grouped-query-attention-pytorch
Users interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆339 · Updated 11 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆249 · Updated 2 years ago
- ☆201 · Updated 2 years ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆250 · Updated 10 months ago
- Rectified Rotary Position Embeddings ☆387 · Updated last year
- ☆218 · Updated 2 months ago
- ☆235 · Updated last year
- Low-bit optimizers for PyTorch ☆138 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge ☆85 · Updated 2 years ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆429 · Updated 4 months ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆198 · Updated 2 years ago
- Implementation of FlashAttention in PyTorch ☆180 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆365 · Updated 2 years ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆67 · Updated last year
- Implementation of speculative sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆106 · Updated last year
- ☆273 · Updated 2 years ago
- Root Mean Square Layer Normalization ☆261 · Updated 2 years ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆113 · Updated last week
- Code for the paper "Patch-Level Training for Large Language Models" ☆97 · Updated 2 months ago
- PyTorch implementation of MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆113 · Updated 3 years ago
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆113 · Updated 2 years ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆71 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Updated last year
- ☆143 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆378 · Updated last year
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- ☆157 · Updated 2 years ago
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year