(Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (https://arxiv.org/pdf/2305.13245.pdf)
☆190 · May 9, 2024 · Updated last year
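As orientation for the technique the repository implements: grouped-query attention splits the query heads into groups, and each group shares a single key/value head (multi-query attention is the special case of one K/V head; standard multi-head attention is one group per head). A minimal PyTorch sketch, where the `grouped_query_attention` helper and its shapes are illustrative and not the repository's actual API:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, num_kv_heads):
    """Minimal GQA sketch (hypothetical helper, not the repo's API).

    q:    (batch, num_heads,    seq, head_dim)
    k, v: (batch, num_kv_heads, seq, head_dim)
    """
    _, num_heads, _, head_dim = q.shape
    group_size = num_heads // num_kv_heads
    # Repeat each K/V head so it is shared by `group_size` query heads.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / head_dim**0.5
    attn = F.softmax(scores, dim=-1)
    return attn @ v

# 8 query heads sharing 2 key/value heads (groups of 4).
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 2, 16, 64)
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v, num_kv_heads=2)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```

The payoff is the smaller K/V tensors: at inference time the KV cache shrinks by `num_heads / num_kv_heads` while the output keeps the full per-query-head shape.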
Alternatives and similar repositories for grouped-query-attention-pytorch
Users interested in grouped-query-attention-pytorch are comparing it to the libraries listed below.
- An open-source implementation of grouped-query attention from the paper "GQA: Training Generalized Multi-Query Transformer Model… (☆15 · Dec 11, 2023 · Updated 2 years ago)
- ☆20 · Oct 25, 2022 · Updated 3 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18 · Mar 15, 2024 · Updated 2 years ago)
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… (☆106 · Nov 24, 2023 · Updated 2 years ago)
- ACT-Bench – We Evaluate Action-Fidelity of World Models for Autonomous Driving (☆28 · Dec 23, 2024 · Updated last year)
- ☆13 · Jan 22, 2025 · Updated last year
- ☆20 · May 30, 2024 · Updated last year
- Fast and memory-efficient exact attention (☆22,832 · Updated this week)
- ☆10 · Dec 28, 2023 · Updated 2 years ago
- Time Series Representation Models (☆13 · Jul 17, 2025 · Updated 8 months ago)
- ☆32 · Oct 30, 2023 · Updated 2 years ago
- A spoken version of the textual story cloze benchmark (☆20 · Aug 6, 2023 · Updated 2 years ago)
- A personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both model and train… (☆59 · Apr 20, 2024 · Updated last year)
- Variance discrepancy representation (☆15 · May 25, 2024 · Updated last year)
- Leveraging BERT to Improve Spoken Language Identification (☆17 · Nov 22, 2022 · Updated 3 years ago)
- Official implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … (☆14 · Aug 25, 2023 · Updated 2 years ago)
- RoFormer V1 & V2 in PyTorch (☆522 · May 18, 2022 · Updated 3 years ago)
- CUDA and Triton implementations of Flash Attention with SoftmaxN (☆73 · May 26, 2024 · Updated last year)
- ☆48 · Mar 31, 2024 · Updated last year
- Checkpointable dataset utilities for foundation model training (☆32 · Jan 29, 2024 · Updated 2 years ago)
- ☆12 · Nov 15, 2022 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 (☆138 · Apr 30, 2024 · Updated last year)
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model (☆10 · Jan 7, 2020 · Updated 6 years ago)
- 📥 🎯 (1,4/4) An MLIR-based toolchain with Vitis HLS LLVM input/output targeting FPGAs (☆14 · Nov 15, 2022 · Updated 3 years ago)
- A repository of pseudocode for AI research papers (☆17 · Jun 20, 2023 · Updated 2 years ago)
- Speech-to-Text forced-alignment Speech processing Universal PERformance Benchmark (☆36 · May 7, 2025 · Updated 10 months ago)
- ☆24 · Sep 25, 2024 · Updated last year
- Codebase for the paper "Schema-guided User Satisfaction Modeling for Task-oriented Dialogues" (☆11 · Aug 6, 2025 · Updated 7 months ago)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19 · Oct 9, 2022 · Updated 3 years ago)
- ☆14 · Oct 3, 2024 · Updated last year
- [ICLR 2026] Data pipeline, models, and benchmark for Omni-Captioner (☆118 · Oct 17, 2025 · Updated 5 months ago)
- (WIP) Long-form speech generation (☆31 · Apr 2, 2025 · Updated 11 months ago)
- A family of open-source Mixture-of-Experts (MoE) large language models (☆1,667 · Mar 8, 2024 · Updated 2 years ago)
- Faster inference (☆28 · Jan 20, 2025 · Updated last year)
- ☆20 · Oct 4, 2024 · Updated last year
- Compute WER and SER for speech recognition evaluation (☆27 · Updated this week)
- Source code for "Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers" (☆18 · May 29, 2024 · Updated last year)
- [NeurIPS 2024] Official implementation of "SDformer: Similarity-driven Discrete Transformer For Time Series Generation" (☆13 · May 23, 2025 · Updated 9 months ago)
- A context window 32× longer than vanilla Transformers and up to 4× longer than memory-efficient Transformers (☆50 · Jun 16, 2023 · Updated 2 years ago)