fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆180 · Updated last year
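For context, the core idea of GQA is that several query heads share a single key/value head, with multi-query attention (one KV head) and standard multi-head attention (one KV head per query head) as the two extremes. Below is a minimal PyTorch sketch of that idea, assuming a `(batch, heads, seq, head_dim)` tensor layout; the `grouped_query_attention` helper and its signature are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """Sketch of grouped-query attention (hypothetical helper, not the repo's API).

    q:    (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim), with n_q_heads % n_kv_heads == 0.
    Each group of n_q_heads // n_kv_heads query heads shares one key/value head.
    """
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    group_size = n_q_heads // n_kv_heads
    # Repeat each KV head so it lines up with its group of query heads.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    return F.scaled_dot_product_attention(q, k, v)

# Toy usage: 8 query heads sharing 2 KV heads (group size 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # -> (1, 8, 16, 64)
```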
Alternatives and similar repositories for grouped-query-attention-pytorch
Users interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆327 · Updated 7 months ago
- ☆197 · Updated last year
- ☆230 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆247 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆377 · Updated 2 weeks ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆197 · Updated 2 years ago
- Root Mean Square Layer Normalization ☆254 · Updated 2 years ago
- Low-bit optimizers for PyTorch ☆131 · Updated last year
- Rectified Rotary Position Embeddings ☆381 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆172 · Updated last year
- ☆210 · Updated 11 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆101 · Updated last year
- Efficient Mixture of Experts for LLM Paper List ☆131 · Updated last week
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆95 · Updated 9 months ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆232 · Updated 6 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆314 · Updated 7 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆84 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆60 · Updated 10 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆362 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆106 · Updated last week
- ☆269 · Updated last year
- qwen-nsa ☆76 · Updated 5 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- Official PyTorch implementation of QA-LoRA ☆141 · Updated last year
- Experiments on Multi-Head Latent Attention ☆96 · Updated last year
- Implementation of FlashAttention in PyTorch ☆171 · Updated 8 months ago
- Code for paper "Patch-Level Training for Large Language Models" ☆88 · Updated 10 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year