ShaderManager / RetNet
PyTorch implementation of Retentive Network: A Successor to Transformer for Large Language Models
☆15 · Updated last year
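For readers new to the architecture, the sketch below illustrates the parallel retention form that the paper (https://arxiv.org/abs/2307.08621) defines as Retention(X) = (QKᵀ ⊙ D)V, where D is a lower-triangular decay matrix with entries γ^(n−m). This is a minimal single-head sketch only: the function name, tensor shapes, and γ value are illustrative assumptions, not this repository's actual API, and the paper's multi-scale heads, group norm, and recurrent/chunkwise forms are omitted.

```python
# Minimal sketch of RetNet's parallel retention (arXiv:2307.08621).
# Illustrative only: names, shapes, and the single-head setup are
# assumptions, not taken from this repository's code.
import torch

def parallel_retention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                       gamma: float) -> torch.Tensor:
    """Retention(X) = (Q K^T ⊙ D) V with D[n, m] = gamma^(n-m) for n >= m."""
    seq_len = q.size(1)                    # q, k, v: (batch, seq_len, d_head)
    idx = torch.arange(seq_len, device=q.device)
    diff = (idx.unsqueeze(1) - idx.unsqueeze(0)).float()  # element [n, m] = n - m
    # Causal decay matrix D: gamma^(n-m) on/below the diagonal, 0 above it.
    decay = (gamma ** diff.clamp(min=0.0)) * (diff >= 0)
    scores = (q @ k.transpose(-1, -2)) * decay            # (batch, L, L)
    return scores @ v                                     # (batch, L, d_head)

# Example: a single retention head; the paper sets gamma per head (e.g. 1 - 2**-5).
q = k = v = torch.randn(2, 8, 16)
out = parallel_retention(q, k, v, gamma=1 - 2**-5)
print(out.shape)  # torch.Size([2, 8, 16])
```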
Alternatives and similar repositories for RetNet:
Users interested in RetNet are comparing it to the libraries listed below.
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http…) ☆105 · Updated last year
- An implementation of the paper "Retentive Network: A Successor to Transformer for Large Language Models" (https://arxiv.org/pdf/2307.08621.pdf) ☆12 · Updated last year
- A repository for DenseSSMs ☆87 · Updated 11 months ago
- [ICLR 2025] Official code release for "Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation" ☆41 · Updated last month
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta ☆101 · Updated 2 months ago
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆52 · Updated 2 months ago
- My implementation of the original Transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing o… ☆43 · Updated 3 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 7 months ago
- The official GitHub page for the survey paper "A Survey of RWKV" ☆25 · Updated 2 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated 11 months ago
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs) ☆11 · Updated 10 months ago
- Implementation of MambaFormer in PyTorch and Zeta from the paper "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks" ☆20 · Updated 2 weeks ago
- Community implementation of the paper "Multi-Head Mixture-of-Experts" in PyTorch ☆22 · Updated 2 months ago
- Implementation of a modular, high-performance, and simple Mamba for high-speed applications ☆33 · Updated 4 months ago
- Playground for Transformers ☆48 · Updated last year
- Implementation of Agent Attention in PyTorch ☆90 · Updated 8 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆86 · Updated 3 weeks ago
- Several types of attention modules written in PyTorch for learning purposes ☆48 · Updated 6 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 5 months ago
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆24 · Updated 2 weeks ago
- Code repository for BlackMamba ☆243 · Updated last year
- Implementation of a Light Recurrent Unit in PyTorch ☆47 · Updated 5 months ago
- Trying out the Mamba architecture on small examples (CIFAR-10, Shakespeare char-level, etc.) ☆44 · Updated last year
- Lion and Adam optimization comparison ☆60 · Updated 2 years ago
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆165 · Updated 2 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆190 · Updated 2 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN from our NeurIPS 2023 paper "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" ☆64 · Updated 11 months ago
- State Space Models ☆67 · Updated 11 months ago