knotgrass / attention
Several types of attention modules written in PyTorch for learning purposes
☆52 · Updated last year
Alternatives and similar repositories for attention
Users interested in attention are comparing it with the repositories listed below.
- Playground for Transformers ☆53 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (see the GQA sketch after this list) ☆182 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆113 · Updated 10 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated last year
- ☆293 · Updated 11 months ago
- PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers" (see the sparse-mask sketch after this list) ☆92 · Updated last month
- PyTorch implementation of MoE (mixture of experts) ☆51 · Updated 4 years ago
- Experiments on Multi-Head Latent Attention ☆98 · Updated last year
- ☆134 · Updated 2 years ago
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆115 · Updated last month
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) (see the Soft MoE sketch after this list) ☆78 · Updated 2 years ago
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 3 weeks ago
- ☆42 · Updated last year
- PyTorch implementation of Jamba from "Jamba: A Hybrid Transformer-Mamba Language Model" ☆195 · Updated 3 weeks ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆101 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention: PyTorch implementation + QwenMoE implementation + training script + 1M cont… ☆83 · Updated last year
- A repository for DenseSSMs ☆89 · Updated last year
- We study toy models of skill learning. ☆31 · Updated 10 months ago
- Training small GPT-2-style models using Kolmogorov-Arnold networks ☆121 · Updated last year
- Implementation of Agent Attention in PyTorch ☆92 · Updated last year
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆110 · Updated this week
- Implementation of a modular, high-performance, and simple Mamba for high-speed applications ☆37 · Updated last year
- Papers for a comprehensive survey of accelerated generation techniques in large language models (LLMs) ☆11 · Updated last year
- Get down and dirty with FlashAttention-2 in PyTorch: plug and play, no complex CUDA kernels ☆111 · Updated 2 years ago
- Root Mean Square Layer Normalization (see the RMSNorm sketch after this list) ☆256 · Updated 2 years ago
- Timm model explorer ☆42 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention modules, akin to mixture-of-experts ☆119 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- ☆76 · Updated 9 months ago
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling ☆210 · Updated last month
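For orientation, here are minimal sketches of a few of the techniques the listed repositories implement. None of this code is taken from those repositories; shapes, names, and parameters are illustrative assumptions.

The GQA entry implements grouped-query attention, where several query heads share one key/value head. A minimal sketch of that sharing (the repeat-based KV expansion and all dimensions here are assumptions, not the linked repo's code):

```python
import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_q_heads, seq, head_dim)
    # k, v: (batch, n_kv_heads, seq, head_dim), with n_q_heads % n_kv_heads == 0
    b, n_q, s, d = q.shape
    group = n_q // q.new_tensor(k.shape[1]).int().item() if False else n_q // k.shape[1]
    # Repeat each KV head so it serves `group` consecutive query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)
    return attn @ v

q = torch.randn(2, 8, 16, 64)   # 8 query heads
k = torch.randn(2, 2, 16, 64)   # 2 shared KV heads
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # (2, 8, 16, 64)
```

With `n_kv_heads == 1` this reduces to multi-query attention; with `n_kv_heads == n_q_heads` it is ordinary multi-head attention.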
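The Sparse Transformers entry factorizes full attention into cheaper patterns. Below is a sketch of one causal pattern in that spirit: a local window plus periodic "summary" columns. The exact pattern and the `stride` parameter are assumptions for illustration, not the paper's only configuration:

```python
import torch

def strided_sparse_mask(seq_len: int, stride: int) -> torch.Tensor:
    # Boolean mask (True = may attend): each query sees the previous
    # `stride` positions plus every stride-th earlier "summary" column.
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    causal = j <= i
    local = (i - j) < stride
    summary = (j % stride) == (stride - 1)
    return causal & (local | summary)

mask = strided_sparse_mask(16, 4)
print(mask.int())  # 16x16 pattern; each row has O(stride + seq/stride) ones
```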
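The Soft MoE entry replaces hard token-to-expert routing with soft dispatch/combine weights: each expert processes a few "slots" that are weighted averages of all tokens, and slot outputs are softly mixed back per token. A rough sketch of that idea (the parameter name `phi`, the expert MLP shape, and the slot counts are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    def __init__(self, dim, n_experts=4, slots_per_expert=1):
        super().__init__()
        # phi scores every (token, expert-slot) pair.
        self.phi = nn.Parameter(torch.randn(dim, n_experts, slots_per_expert))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (batch, seq, dim)
        logits = torch.einsum('bsd,dep->bsep', x, self.phi)
        dispatch = logits.softmax(dim=1)         # normalize over tokens
        combine = logits.flatten(2).softmax(-1)  # normalize over all slots
        slots = torch.einsum('bsd,bsep->bepd', x, dispatch)
        outs = torch.stack([f(slots[:, i]) for i, f in enumerate(self.experts)], dim=1)
        return torch.einsum('bsn,bnd->bsd', combine, outs.flatten(1, 2))

x = torch.randn(2, 16, 64)
print(SoftMoE(64)(x).shape)  # torch.Size([2, 16, 64])
```

Because every token contributes to every slot, the routing is fully differentiable and avoids the load-balancing losses that sparse MoEs need.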
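The RMSNorm entry implements Root Mean Square Layer Normalization, which drops LayerNorm's mean-centering and bias and rescales by the inverse root-mean-square with a learned gain. A minimal sketch (the `eps` value and gain-only parameterization are common choices, not necessarily the repo's exact defaults):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """y = x / RMS(x) * g, with RMS computed over the last dimension."""
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned gain g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        inv_rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * inv_rms * self.weight

x = torch.randn(4, 10, 512)
print(RMSNorm(512)(x).shape)  # torch.Size([4, 10, 512])
```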