kyegomez / MGQA
An open-source implementation of multi-grouped query attention from the paper "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints"
☆15 · Updated 2 years ago
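For context, grouped-query attention sits between standard multi-head attention (one K/V head per query head) and multi-query attention (a single K/V head shared by all query heads): each group of query heads shares one K/V head. Below is a minimal PyTorch sketch of that idea; the class name and parameters are illustrative assumptions, not the repository's actual interface.

```python
# Minimal grouped-query attention sketch (illustrative only; not the MGQA repo's API).
# Queries use n_heads heads; keys/values use n_kv_heads heads, each shared by a
# group of n_heads // n_kv_heads query heads, as described in the GQA paper.
import torch
import torch.nn.functional as F
from torch import nn

class GroupedQueryAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        assert n_heads % n_kv_heads == 0 and dim % n_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(n_heads * self.head_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        # Project and split into heads: (batch, heads, time, head_dim).
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Broadcast each K/V head across its group of query heads.
        group = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(group, dim=1)
        v = v.repeat_interleave(group, dim=1)
        out = F.scaled_dot_product_attention(q, k, v)  # (b, n_heads, t, head_dim)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))

if __name__ == "__main__":
    attn = GroupedQueryAttention(dim=512, n_heads=8, n_kv_heads=2)
    print(attn(torch.randn(1, 16, 512)).shape)  # torch.Size([1, 16, 512])
```

The K/V projections here are smaller than the query projection, which is the source of GQA's memory savings: the KV cache shrinks by the group factor while quality stays close to full multi-head attention.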
Alternatives and similar repositories for MGQA
Users interested in MGQA are comparing it to the libraries listed below.
- A simple, reproducible template for implementing AI research papers ☆24 · Updated last year
- ☆191 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆113 · Updated last week
- Implementation of Infini-Transformer in Pytorch ☆112 · Updated last year
- A repository for DenseSSMs ☆88 · Updated last year
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆82 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆343 · Updated 10 months ago
- This repository contains the code for the paper "TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back)… ☆13 · Updated 10 months ago
- Pytorch implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated 2 weeks ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆122 · Updated last year
- Implementation of "PaLM2-VAdapter:" from the multi-modal model paper: "PaLM2-VAdapter: Progressively Aligned Language Model Makes a Stron… ☆17 · Updated last year
- Pytorch implementation of the model from "MIRASOL3B: A MULTIMODAL AUTOREGRESSIVE MODEL FOR TIME-ALIGNED AND CONTEXTUAL MODALITIES" ☆26 · Updated last year
- Implementation of the model "(MC-ViT)" from the paper: "Memory Consolidation Enables Long-Context Video Understanding" ☆27 · Updated 2 weeks ago
- Implementation of Qformer from BLIP2 in Zeta Lego blocks ☆47 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆120 · Updated 2 weeks ago
- A context window 32 times longer than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆50 · Updated 2 years ago
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆31 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- Self-contained Pytorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise ☆40 · Updated last year
- Implementation of Agent Attention in Pytorch ☆93 · Updated last year
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability ☆98 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- Official Pytorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆97 · Updated 2 months ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch ☆104 · Updated 2 years ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation ☆70 · Updated 3 months ago
- PyTorch implementation of Object Recognition as Next Token Prediction [CVPR'24 Highlight] ☆182 · Updated 9 months ago