fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆181, updated last year
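For context, grouped-query attention lets groups of query heads share a single key/value head, interpolating between multi-head attention (one KV head per query head) and multi-query attention (one KV head total). Below is a minimal PyTorch sketch of the idea, not this repo's actual API; the function name and tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """Minimal GQA sketch (illustrative, not the repo's API).

    q:    (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim), n_q_heads % n_kv_heads == 0
    """
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    group_size = n_q_heads // n_kv_heads
    # Each group of query heads attends to one shared K/V head:
    # expand the KV heads so query head i uses KV head i // group_size.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    return F.scaled_dot_product_attention(q, k, v)

# Example: 8 query heads sharing 2 KV heads (group size 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # -> (1, 8, 16, 64)
```

The memory saving comes from the smaller KV cache: only `n_kv_heads` key/value heads are stored per token, while compute-time behavior matches standard multi-head attention after the expansion.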
Alternatives and similar repositories for grouped-query-attention-pytorch
Users interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (☆330, updated 8 months ago)
- Low-bit optimizers for PyTorch (☆132, updated 2 years ago)
- Official implementation of TransNormerLLM: A Faster and Better LLM (☆247, updated last year)
- Implementation of "Attention Is Off By One" by Evan Miller (☆196, updated 2 years ago)
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆174, updated last year)
- Root Mean Square Layer Normalization (☆256, updated 2 years ago); see the sketch after this list
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) (☆353, updated 2 years ago)
- Rectified Rotary Position Embeddings (☆381, updated last year)
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) (☆389, updated last month)
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" (☆124, updated last year)
- Experiments on Multi-Head Latent Attention (☆97, updated last year)
- Lion and Adam optimization comparison (☆64, updated 2 years ago)
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆108, updated last week)
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (☆104, updated last year)
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge (☆84, updated 2 years ago)
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM (☆97, updated 10 months ago)
- Get down and dirty with FlashAttention-2 in PyTorch; plug and play, no complex CUDA kernels (☆108, updated 2 years ago)
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism (☆104, updated last year)
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) (☆112, updated 3 years ago)
- Efficient Mixture of Experts for LLM Paper List (☆140, updated last month)
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) (☆126, updated last year)
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" (☆317, updated 7 months ago)
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" (☆205, updated 8 months ago)
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) (☆234, updated 7 months ago)
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models (☆61, updated 11 months ago)
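As referenced in the Root Mean Square Layer Normalization entry above, RMSNorm normalizes by the root mean square of the activations with a learned gain, skipping LayerNorm's mean-centering and bias. A minimal sketch (illustrative, not the linked repo's code):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMSNorm sketch: y = x / sqrt(mean(x^2) + eps) * g.

    Unlike LayerNorm, there is no mean subtraction and no bias term,
    which makes it slightly cheaper to compute.
    """

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned gain g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reciprocal of the RMS over the last (feature) dimension.
        inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * inv_rms * self.weight
```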