Jamie-Stirling / RetNet
An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
☆1,175 · Updated last year
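For context, the core operation the paper introduces (and this repository implements) is retention. Below is a minimal sketch of the parallel retention form, assuming a single head with no normalization, gating, or multi-scale decay; the function name and shapes are illustrative, not taken from this repository's code.

```python
import torch

def parallel_retention(q, k, v, gamma):
    """q, k, v: (batch, seq_len, d) projections; gamma: scalar decay in (0, 1)."""
    seq_len = q.shape[1]
    idx = torch.arange(seq_len)
    exponent = idx[:, None] - idx[None, :]  # n - m for each (n, m) pair
    # D[n, m] = gamma^(n - m) for n >= m, else 0: a causal mask with decay
    decay = (gamma ** exponent.clamp(min=0).float()) * (exponent >= 0)
    scores = q @ k.transpose(-1, -2)        # (batch, seq_len, seq_len)
    return (scores * decay) @ v             # (batch, seq_len, d)
```

The decay matrix replaces attention's softmax: per-token contributions fade geometrically with distance, which is what admits an equivalent recurrent form (sketched at the end of this page).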
Alternatives and similar repositories for RetNet:
Users interested in RetNet are comparing it to the libraries listed below
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… (a recurrent-form sketch appears after this list) ☆225 · Updated 10 months ago
- Meta-Transformer for Unified Multimodal Learning ☆1,562 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,039 · Updated 9 months ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that purportedly outperforms Adam(W), in Pytorch ☆2,087 · Updated 2 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch ☆634 · Updated last month
- Structured state space sequence models ☆2,541 · Updated 6 months ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,112 · Updated last month
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆945 · Updated last year
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,110 · Updated 6 months ago
- Collection of papers on state-space models ☆572 · Updated this week
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,188 · Updated 2 months ago
- Awesome Papers related to Mamba. ☆1,292 · Updated 3 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,076 · Updated last year
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (a minimal RoPE sketch appears after this list) ☆619 · Updated 2 months ago
- LOMO: LOw-Memory Optimization ☆978 · Updated 6 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,527 · Updated this week
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,700 · Updated 10 months ago
- Transformer based on a variant of attention that has linear complexity with respect to sequence length ☆731 · Updated 8 months ago
- Implementation of the plug-and-play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆697 · Updated last year
- Vector (and Scalar) Quantization, in Pytorch ☆2,863 · Updated this week
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆545 · Updated last month
- Code for CRATE (Coding RAte reduction TransformEr). ☆1,205 · Updated 3 months ago
- Implementation of Recurrent Memory Transformer (NeurIPS 2022 paper), in Pytorch ☆403 · Updated 3 weeks ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,027 · Updated 9 months ago
- Unofficial implementation of iTransformer, SOTA time-series forecasting using attention networks, from Tsinghua / Ant Group ☆470 · Updated last month
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆671 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,427 · Updated 10 months ago
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs ☆378 · Updated 7 months ago
- An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN). ☆4,202 · Updated 5 months ago
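As a companion to the parallel sketch near the top of this page, here is the recurrent retention form mentioned in the Huggingface-compatible RetNet entry above. It produces the same outputs one token at a time with a fixed-size state, which is why RetNet inference is O(1) per token; again a single-head, illustrative sketch rather than any listed repository's actual API.

```python
import torch

def recurrent_retention(q, k, v, gamma):
    """Same inputs and outputs as parallel_retention, computed sequentially."""
    batch, seq_len, d = q.shape
    state = torch.zeros(batch, d, d)  # S_0 = 0
    outputs = []
    for n in range(seq_len):
        # S_n = gamma * S_{n-1} + K_n^T V_n  (rank-1 state update)
        state = gamma * state + k[:, n, :, None] @ v[:, n, None, :]
        # o_n = Q_n S_n
        outputs.append(q[:, n, None, :] @ state)
    return torch.cat(outputs, dim=1)  # (batch, seq_len, d)
```

Expanding the recurrence gives S_n = sum over m ≤ n of gamma^(n−m) K_m^T V_m, so Q_n S_n reproduces exactly the masked-decay sum of the parallel form, up to floating-point accumulation order.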
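Similarly, for the rotary-embeddings entry above, a minimal sketch of RoPE (arXiv:2104.09864) using the interleaved-pair convention; `apply_rope` and its signature are illustrative and not that library's API.

```python
import torch

def apply_rope(x, base=10000.0):
    """x: (batch, seq_len, d) with d even; rotates each consecutive channel
    pair by an angle that grows with position and shrinks with pair index."""
    batch, seq_len, d = x.shape
    inv_freq = base ** (-torch.arange(0, d, 2).float() / d)     # (d/2,)
    angles = torch.arange(seq_len).float()[:, None] * inv_freq  # (seq_len, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # standard 2-D rotation per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Because the rotation is applied to queries and keys before their dot product, the resulting score depends only on the relative offset between positions, which is the property the Roformer paper is after.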