fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆173 · Updated last year
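For reference, below is a minimal sketch of the grouped-query attention idea the repository implements: query heads are split into groups, and each group shares a single key/value head. This is an illustration only, not the repository's actual API; the function name, tensor shapes, and head counts are assumptions.

```python
import torch

def grouped_query_attention(q, k, v):
    """Minimal GQA sketch (illustrative, not fkodom's API).

    q: (batch, n_query_heads, seq_len, head_dim)
    k, v: (batch, n_kv_heads, seq_len, head_dim), with n_query_heads % n_kv_heads == 0.
    Each group of query heads attends with one shared key/value head.
    """
    b, hq, t, d = q.shape
    hkv = k.shape[1]
    group_size = hq // hkv
    # Repeat each K/V head so it lines up with its group of query heads.
    k = k.repeat_interleave(group_size, dim=1)  # (b, hq, t, d)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / d**0.5   # (b, hq, t, t)
    attn = scores.softmax(dim=-1)
    return attn @ v                             # (b, hq, t, d)

# Example: 8 query heads sharing 2 key/value heads (group size 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])
```

With n_kv_heads = n_query_heads this reduces to standard multi-head attention, and with n_kv_heads = 1 it reduces to multi-query attention; GQA interpolates between the two.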
Alternatives and similar repositories for grouped-query-attention-pytorch
Users who are interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆323 · Updated 5 months ago
- ☆196 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need ☆335 · Updated 3 weeks ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆247 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆103 · Updated last week
- ☆223 · Updated last year
- ☆204 · Updated 9 months ago
- Low-bit optimizers for PyTorch ☆130 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆167 · Updated last year
- Root Mean Square Layer Normalization ☆249 · Updated 2 years ago
- Implementation of FlashAttention in PyTorch ☆159 · Updated 6 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆338 · Updated 2 years ago
- Efficient Mixture of Experts for LLM Paper List ☆87 · Updated 7 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆85 · Updated 7 months ago
- Code for paper "Patch-Level Training for Large Language Models" ☆86 · Updated 8 months ago
- Rectified Rotary Position Embeddings ☆375 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated last year
- ☆269 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆226 · Updated 4 months ago
- Experiments on Multi-Head Latent Attention ☆93 · Updated 11 months ago
- ☆139 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆109 · Updated 3 years ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆431 · Updated 2 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆368 · Updated last month
- 🔥 A minimal training framework for scaling FLA models ☆220 · Updated last month
- ☆106 · Updated last year
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 · Updated 6 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge ☆81 · Updated last year