Jamie-Stirling / RetNet
An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
☆1,205 · Updated last year
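For orientation, below is a minimal single-head sketch of the retention mechanism the paper proposes, in both its parallel and recurrent forms. This is illustrative only and is not this repository's API: it assumes plain `(seq, dim)` tensors and a single fixed decay `gamma` (RetNet actually uses a per-head decay schedule).

```python
# Minimal single-head retention sketch, following RetNet (arXiv:2307.08621).
# Illustrative assumptions: q, k, v have shape (seq, dim); one fixed decay.
import torch

def retention_parallel(q, k, v, gamma=0.96875):
    """Parallel form: O = (Q K^T * D) V, with D[n, m] = gamma^(n-m) for n >= m."""
    seq_len = q.shape[-2]
    n = torch.arange(seq_len)
    # Causal decay mask: gamma^(n-m) on and below the diagonal, 0 above it.
    exponent = (n[:, None] - n[None, :]).clamp(min=0).float()
    decay = (gamma ** exponent) * (n[:, None] >= n[None, :])
    return (q @ k.transpose(-1, -2) * decay) @ v

def retention_recurrent(q, k, v, gamma=0.96875):
    """Recurrent form: S_n = gamma * S_{n-1} + k_n^T v_n, then o_n = q_n S_n."""
    state = torch.zeros(k.shape[-1], v.shape[-1])
    outputs = []
    for t in range(q.shape[-2]):
        state = gamma * state + k[t].unsqueeze(-1) * v[t].unsqueeze(0)
        outputs.append(q[t] @ state)
    return torch.stack(outputs)
```

The two forms compute the same outputs, which is the paper's central point: `torch.allclose(retention_parallel(q, k, v), retention_recurrent(q, k, v), atol=1e-5)` holds for matching inputs, giving parallel training and O(1)-per-token recurrent inference.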
Alternatives and similar repositories for RetNet
Users interested in RetNet are comparing it to the libraries listed below.
- Foundation Architecture for (M)LLMs ☆3,115 · Updated last year
- Meta-Transformer for Unified Multimodal Learning ☆1,629 · Updated last year
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,319 · Updated 9 months ago
- Hugging Face-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf), including parallel, recurrent,… ☆226 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,857 · Updated last year
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,254 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch (see the rotary-embedding sketch after this list) ☆751 · Updated last month
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆970 · Updated last year
- Structured state space sequence models ☆2,725 · Updated last year
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆711 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆652 · Updated 8 months ago
- Build high-performance AI models with modular building blocks ☆550 · Updated last week
- Code for CRATE (Coding RAte reduction TransformEr). ☆1,243 · Updated 11 months ago
- Collection of papers on state-space models ☆600 · Updated 2 weeks ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆809 · Updated 2 years ago
- Convolutions for Sequence Modeling ☆898 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,131 · Updated last year
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in PyTorch (see the Lion update sketch after this list) ☆2,163 · Updated 9 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆932 · Updated last year
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,169 · Updated last year
- Transformer based on a variant of attention with linear complexity in sequence length (see the linear-attention sketch after this list) ☆797 · Updated last year
- LOMO: LOw-Memory Optimization ☆989 · Updated last year
- Code release for DynamicTanh (DyT) (see the DyT sketch after this list) ☆1,012 · Updated 5 months ago
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆587 · Updated 3 weeks ago
- Awesome Papers related to Mamba. ☆1,377 · Updated 11 months ago
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆422 · Updated last year
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆572 · Updated 7 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆361 · Updated last year
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,145 · Updated last year
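Rotary embeddings (referenced from the RoFormer item above): the sketch below applies position-dependent rotations to channel pairs, following the RoFormer formula. It is illustrative only and assumes a plain `(seq, dim)` tensor; the rotary-embedding repository listed above exposes a different, richer API.

```python
# Rotary position embedding (RoPE) sketch, per the RoFormer paper.
# Assumption: x has shape (seq, dim) with an even dim.
import torch

def rotary_embed(x, base=10000.0):
    seq, dim = x.shape
    # One rotation frequency per channel pair: 1 / base^(2i / dim).
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.arange(seq).float()[:, None] * inv_freq[None, :]  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```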
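Lion (referenced from the optimizer item above) takes the sign of an interpolation between momentum and gradient, with decoupled weight decay. This is a sketch of the published update rule, not the lion-pytorch API; the parallel `params`/`momenta` lists and default hyperparameters are assumptions for illustration.

```python
# One Lion update step, per "Symbolic Discovery of Optimization Algorithms".
import torch

@torch.no_grad()
def lion_step(params, momenta, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    for p, m in zip(params, momenta):
        if p.grad is None:
            continue
        g = p.grad
        # Update direction: sign of the momentum/gradient interpolation.
        update = (beta1 * m + (1 - beta1) * g).sign()
        # Decoupled weight decay, then the signed step.
        p.mul_(1 - lr * wd).add_(update, alpha=-lr)
        # Momentum tracks gradients with the second coefficient.
        m.mul_(beta2).add_(g, alpha=1 - beta2)
```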
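Linear attention (referenced from the linear-complexity item above) rewrites softmax attention with a kernel feature map so `K^T V` is computed once, making the cost linear in sequence length. A non-causal sketch using the `elu(x) + 1` feature map from "Transformers are RNNs" (Katharopoulos et al., 2020); shapes and names are assumptions, not the listed repository's API.

```python
# Non-causal linear attention sketch: phi(Q) (phi(K)^T V) with normalization.
# Assumption: q, k, v have shape (seq, dim).
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    q, k = F.elu(q) + 1, F.elu(k) + 1          # positive feature map
    kv = k.transpose(-1, -2) @ v               # (dim, dim_v), O(n * d^2) total
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-1, -2)  # normalizer, (seq, 1)
    return (q @ kv) / (z + eps)
```

A causal variant replaces the single `K^T V` product with a running prefix sum over positions, which is what makes these models usable as RNNs at inference time.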
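DyT (referenced from the DynamicTanh item above) replaces LayerNorm with an elementwise `gamma * tanh(alpha * x) + beta`. This sketch is reconstructed from the paper's formula, not taken from the released code; the `init_alpha=0.5` default is an assumption based on the paper's reported initialization.

```python
# Dynamic Tanh (DyT) sketch, per "Transformers without Normalization":
# a learnable, normalization-free drop-in replacement for LayerNorm.
import torch
import torch.nn as nn

class DyT(nn.Module):
    def __init__(self, dim, init_alpha=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), init_alpha))  # learnable scalar
        self.gamma = nn.Parameter(torch.ones(dim))    # per-channel scale
        self.beta = nn.Parameter(torch.zeros(dim))    # per-channel shift

    def forward(self, x):
        return self.gamma * torch.tanh(self.alpha * x) + self.beta
```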