knotgrass / attention
Several types of attention modules written in PyTorch, for learning purposes.
☆50 · Updated 6 months ago
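The one-line description above does not show what such a module looks like. As a minimal, self-contained sketch (illustrative only, not code taken from knotgrass/attention; all names and default sizes are made up), a standard multi-head self-attention module in PyTorch could be written as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    """Minimal multi-head self-attention, for illustration only."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        assert d_model % n_heads == 0, "d_model must be divisible by n_heads"
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)  # fused Q, K, V projections
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape                            # (batch, seq_len, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split into heads: (batch, n_heads, seq_len, d_head)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2) for z in (q, k, v))
        # scaled dot-product attention: softmax(Q K^T / sqrt(d_head)) V
        scores = q @ k.transpose(-2, -1) / (self.d_head ** 0.5)
        out = F.softmax(scores, dim=-1) @ v
        out = out.transpose(1, 2).reshape(b, t, d)   # merge heads back
        return self.out(out)

x = torch.randn(2, 16, 512)                          # (batch, seq_len, d_model)
print(MultiHeadSelfAttention()(x).shape)             # torch.Size([2, 16, 512])
```

Most of the repositories listed below are variations on this basic pattern (different head layouts, sparsity, recurrence, or memory).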
Alternatives and similar repositories for attention:
Users interested in attention are comparing it to the libraries listed below.
- Experiments on Multi-Head Latent Attention ☆87 · Updated 8 months ago
- Implementation of Infini-Transformer in Pytorch ☆110 · Updated 3 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 8 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆90 · Updated this week
- Playground for Transformers ☆49 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 6 months ago
- A repository for DenseSSMs ☆87 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆40 · Updated last year
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆51 · Updated 10 months ago
- Code for the DDP tutorial ☆32 · Updated 3 years ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆52 · Updated 3 weeks ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (see the GQA sketch after this list) ☆165 · Updated 11 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆26 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention Pytorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆82 · Updated 11 months ago
- This is a simple torch implementation of the high performance Multi-Query Attention ☆16 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆24 · Updated 10 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 6 months ago
- Awesome Triton Resources ☆26 · Updated 3 weeks ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated this week
- Train, tune, and infer Bamba model ☆88 · Updated this week
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆37 · Updated 3 years ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆81 · Updated 5 months ago
- PyTorch implementation of moe, which stands for mixture of experts ☆43 · Updated 4 years ago
- The official GitHub page for the survey paper "A Survey of RWKV". ☆25 · Updated 3 months ago
- Pytorch Implementation of the sparse attention from the paper: "Generating Long Sequences with Sparse Transformers" ☆78 · Updated 2 weeks ago
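Several of the entries above (notably the GQA and Multi-Query Attention repositories) are built around sharing key/value heads across groups of query heads to shrink the KV cache. As a rough sketch of that idea only (not code from any listed repository; the class name and the 8-query-head / 2-KV-head split are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedQueryAttention(nn.Module):
    """Illustrative grouped-query attention: n_q_heads query heads share n_kv_heads K/V heads."""

    def __init__(self, d_model: int = 512, n_q_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        assert d_model % n_q_heads == 0 and n_q_heads % n_kv_heads == 0
        self.n_q, self.n_kv = n_q_heads, n_kv_heads
        self.d_head = d_model // n_q_heads
        self.q_proj = nn.Linear(d_model, n_q_heads * self.d_head)
        self.k_proj = nn.Linear(d_model, n_kv_heads * self.d_head)  # fewer K/V heads than Q heads
        self.v_proj = nn.Linear(d_model, n_kv_heads * self.d_head)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_q, self.d_head).transpose(1, 2)   # (b, n_q, t, d_head)
        k = self.k_proj(x).view(b, t, self.n_kv, self.d_head).transpose(1, 2)  # (b, n_kv, t, d_head)
        v = self.v_proj(x).view(b, t, self.n_kv, self.d_head).transpose(1, 2)
        # repeat each K/V head so every group of query heads attends to its shared K/V head
        group = self.n_q // self.n_kv
        k = k.repeat_interleave(group, dim=1)                                   # (b, n_q, t, d_head)
        v = v.repeat_interleave(group, dim=1)
        scores = q @ k.transpose(-2, -1) / (self.d_head ** 0.5)
        out = (F.softmax(scores, dim=-1) @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(out)

x = torch.randn(2, 16, 512)
print(GroupedQueryAttention()(x).shape)  # torch.Size([2, 16, 512])
```

With n_kv_heads equal to n_q_heads this reduces to ordinary multi-head attention, and with n_kv_heads = 1 it reduces to multi-query attention; the setting in between is the grouped-query case.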