Jamie-Stirling / RetNet
An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
☆1,179 · Updated last year
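The repository implements the retention mechanism from the paper (arXiv:2307.08621). As a minimal sketch of the single-head *parallel* form, Retention(X) = (QKᵀ ⊙ D)V with decay mask D[n, m] = γ^(n−m) for n ≥ m and 0 otherwise; the function name, signature, and default `gamma` below are illustrative assumptions, not the repository's actual API.

```python
import torch

def simple_retention(q, k, v, gamma=0.96875):
    """Single-head retention, parallel form: (Q K^T ⊙ D) V.
    q, k, v: (batch, seq_len, d); gamma is the per-head decay (illustrative default)."""
    seq_len = q.shape[1]
    n = torch.arange(seq_len)
    # Causal decay mask: D[n, m] = gamma^(n - m) for n >= m, else 0.
    decay = (gamma ** (n[:, None] - n[None, :]).float()) * (n[:, None] >= n[None, :])
    return (q @ k.transpose(-1, -2) * decay) @ v
```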
Alternatives and similar repositories for RetNet:
Users interested in RetNet are comparing it to the repositories listed below.
- Foundation Architecture for (M)LLMs ☆3,067 · Updated 11 months ago
- Hugging Face-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… (the recurrent form is sketched after this list) ☆225 · Updated last year
- Meta-Transformer for Unified Multimodal Learning ☆1,582 · Updated last year
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,191 · Updated 4 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆723 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆640 · Updated 3 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆656 · Updated 4 months ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in PyTorch ☆2,117 · Updated 4 months ago
- LOMO: LOw-Memory Optimization ☆982 · Updated 9 months ago
- Build high-performance AI models with modular building blocks ☆492 · Updated this week
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆955 · Updated last year
- Collection of papers on state-space models ☆585 · Updated last month
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,763 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,084 · Updated 11 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Google Brain, in PyTorch ☆325 · Updated 9 months ago
- Awesome papers related to Mamba. ☆1,337 · Updated 5 months ago
- Structured state space sequence models ☆2,598 · Updated 8 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) ☆1,099 · Updated last year
- A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ☆1,022 · Updated 6 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆549 · Updated 3 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆2,195 · Updated this week
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,031 · Updated 9 months ago
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆702 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆987 · Updated 5 months ago
- Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆1,149 · Updated 8 months ago
- [ICLR 2025 Spotlight 🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆542 · Updated last month
- A simple and effective LLM pruning approach. ☆731 · Updated 7 months ago
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,342 · Updated last month
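The Hugging Face-compatible entry above exposes retention's recurrent form alongside the parallel one. The reason both exist: the decay mask makes retention a linear recurrence, Sₙ = γ Sₙ₋₁ + kₙᵀ vₙ with output qₙ Sₙ, so inference can run with O(1) state per token. Below is a hedged sketch under the same illustrative names as the parallel-form sketch near the top of this page; it is not any listed repository's actual API.

```python
import torch

def recurrent_retention(q, k, v, gamma=0.96875):
    """Single-head retention, recurrent form:
    S_n = gamma * S_{n-1} + k_n^T v_n,  out_n = q_n S_n.
    Matches the parallel form's output up to floating-point error."""
    batch, seq_len, d = q.shape
    state = torch.zeros(batch, d, d)  # running state S_n
    outs = []
    for t in range(seq_len):
        # Outer product k_t^T v_t, accumulated with exponential decay gamma.
        state = gamma * state + k[:, t, :, None] * v[:, t, None, :]
        outs.append(torch.einsum('bd,bde->be', q[:, t], state))
    return torch.stack(outs, dim=1)

# Illustrative usage: output matches the parallel-form sketch to ~1e-5.
q, k, v = (torch.randn(2, 8, 16) for _ in range(3))
out = recurrent_retention(q, k, v)  # shape (2, 8, 16)
```

The chunkwise form truncated in that entry's description combines the two: parallel computation within a chunk, with the recurrent state carried across chunk boundaries.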