fkodom / grouped-query-attention-pytorch
(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆133 · Updated 6 months ago
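The core idea of GQA is that several query heads share a single key/value head, shrinking the KV cache while staying close to full multi-head attention quality. Below is a minimal, self-contained PyTorch sketch of that mechanism; the function name and tensor shapes are illustrative assumptions, not this repo's actual API:

```python
import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_q_heads, seq, head_dim)
    # k, v: (batch, n_kv_heads, seq, head_dim), with n_q_heads % n_kv_heads == 0
    n_q_heads, head_dim = q.shape[1], q.shape[-1]
    group_size = n_q_heads // k.shape[1]
    # Each K/V head serves one group of query heads: repeat it to align shapes.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / head_dim**0.5
    return torch.softmax(scores, dim=-1) @ v

# Example: 8 query heads sharing 2 K/V heads (groups of 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # shape: (1, 8, 16, 64)
```

With as many K/V heads as query heads this reduces to standard multi-head attention, and with a single K/V head it becomes multi-query attention; GQA interpolates between the two, which is the trade-off the paper studies.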
Related projects
Alternatives and complementary repositories for grouped-query-attention-pytorch
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆138 · Updated 2 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 6 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆184 · Updated 6 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆275 · Updated last year
- Low-bit optimizers for PyTorch ☆119 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆230 · Updated 6 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆229 · Updated 9 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆293 · Updated 5 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆211 · Updated last year
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆71 · Updated this week
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆254 · Updated 2 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆37 · Updated 10 months ago
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" ☆246 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆265 · Updated this week
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆134 · Updated 5 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆33 · Updated 5 months ago
- Rectified Rotary Position Embeddings ☆341 · Updated 6 months ago
- Awesome list for LLM pruning ☆167 · Updated this week
- Ring attention implementation with flash attention ☆585 · Updated last week
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆384 · Updated 6 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆357 · Updated this week
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆144 · Updated 5 months ago