fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆165 · Updated 11 months ago
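For quick reference, here is a minimal sketch of the grouped-query attention mechanism in plain PyTorch (the function name, tensor shapes, and `num_kv_heads` argument are illustrative assumptions, not this repo's actual API):

```python
import torch

def grouped_query_attention(q, k, v, num_kv_heads):
    """Sketch of GQA: each group of query heads shares one K/V head.

    q:    (batch, num_heads, seq_len, head_dim)
    k, v: (batch, num_kv_heads, seq_len, head_dim)
    """
    batch, num_heads, seq_len, head_dim = q.shape
    group_size = num_heads // num_kv_heads
    # Broadcast each K/V head across its group of query heads.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
    return scores.softmax(dim=-1) @ v

# Example: 8 query heads sharing 2 K/V heads (groups of 4).
# num_kv_heads=1 recovers multi-query attention; num_kv_heads=num_heads is standard MHA.
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 2, 16, 64)
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v, num_kv_heads=2)  # -> (2, 8, 16, 64)
```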
Alternatives and similar repositories for grouped-query-attention-pytorch:
Users interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆282 · Updated 2 months ago
- ☆194 · Updated 6 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆91 · Updated last week
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆243 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need ☆243 · Updated this week
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆158 · Updated 10 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- Low-bit optimizers for PyTorch ☆128 · Updated last year
- 🔥 A minimal training framework for scaling FLA models ☆119 · Updated this week
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆195 · Updated 5 months ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆122 · Updated last year
- ☆220 · Updated 10 months ago
- ☆189 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆84 · Updated 5 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated 10 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆287 · Updated last month
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆320 · Updated last year
- ☆132 · Updated 9 months ago
- ☆256 · Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆328 · Updated 10 months ago
- Rectified Rotary Position Embeddings ☆367 · Updated 11 months ago
- ☆147 · Updated last year
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆242 · Updated 3 months ago
- ☆100 · Updated 10 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 11 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆327 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆284 · Updated 2 months ago
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆100 · Updated last month
- ☆103 · Updated last year