fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆185 · Updated last year
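The repository implements the grouped-query attention mechanism from the linked paper: query heads are split into groups, and each group shares a single key/value head, interpolating between multi-head attention (one KV head per query head) and multi-query attention (one KV head total). Below is a minimal sketch of the idea; it is not code from the repository, the function name and shapes are illustrative, and it assumes PyTorch >= 2.0 for `scaled_dot_product_attention`.

```python
# Illustrative sketch of grouped-query attention (GQA), not the repo's code.
# Each group of query heads shares one key/value head; KV heads are repeated
# to match the number of query heads before standard attention.
import torch
import torch.nn.functional as F

def grouped_query_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    group_size = q.shape[1] // k.shape[1]  # query heads per KV head
    k = k.repeat_interleave(group_size, dim=1)  # share each KV head across its group
    v = v.repeat_interleave(group_size, dim=1)
    return F.scaled_dot_product_attention(q, k, v)  # requires PyTorch >= 2.0

# 8 query heads sharing 2 KV heads (group size 4); n_kv == n_q recovers MHA,
# and n_kv == 1 recovers multi-query attention (MQA).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # shape: (1, 8, 16, 64)
```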
Alternatives and similar repositories for grouped-query-attention-pytorch
Users interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆248 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 · Updated 9 months ago
- ☆235 · Updated last year
- Root Mean Square Layer Normalization ☆258 · Updated 2 years ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆197 · Updated 2 years ago
- ☆200 · Updated 2 years ago
- Low-bit optimizers for PyTorch ☆133 · Updated 2 years ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- ☆215 · Updated last week
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆413 · Updated 2 months ago
- Rectified Rotary Position Embeddings ☆384 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆111 · Updated this week
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge ☆84 · Updated 2 years ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆68 · Updated last year
- Get down and dirty with FlashAttention-2.0 in PyTorch: plug and play, no complex CUDA kernels ☆112 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Implementation of FlashAttention in PyTorch ☆175 · Updated 10 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆95 · Updated 3 weeks ago
- ☆157 · Updated 2 years ago
- ☆272 · Updated 2 years ago
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆240 · Updated 8 months ago
- Efficient Mixture of Experts for LLM Paper List ☆145 · Updated 2 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆361 · Updated 2 years ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆66 · Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆108 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆99 · Updated 11 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆323 · Updated 9 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆167 · Updated last year