state-spaces / s4
Structured state space sequence models
☆2,725 · Updated last year
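S4 and the models listed below are built on a discretized linear state space layer: a hidden state x_k = Ā x_{k-1} + B̄ u_k with readout y_k = C x_k. As rough orientation only, here is a minimal sketch of that recurrence in plain PyTorch; this is not the repository's implementation (which uses a structured, HiPPO-initialized parameterization of A and an equivalent convolutional mode for training), and all names here are illustrative:

```python
import torch

def ssm_scan(A_bar, B_bar, C, u):
    """Minimal discretized linear SSM recurrence:
    x_k = A_bar @ x_{k-1} + B_bar * u_k,  y_k = C @ x_k.
    A_bar: (N, N), B_bar: (N,), C: (N,), u: (L,) single input channel."""
    x = torch.zeros(A_bar.shape[0], dtype=u.dtype)
    ys = []
    for u_k in u:                      # sequential scan over the sequence
        x = A_bar @ x + B_bar * u_k    # state update
        ys.append(C @ x)               # readout
    return torch.stack(ys)

# Toy usage with stable, unstructured dynamics. S4's contribution is a
# structured A that makes this same map cheap to train as a long convolution.
L, N = 16, 4
y = ssm_scan(0.9 * torch.eye(N), torch.ones(N), torch.randn(N), torch.randn(L))
print(y.shape)  # torch.Size([16])
```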
Alternatives and similar repositories for s4
Users interested in s4 are comparing it to the libraries listed below.
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,319 · Updated 9 months ago
- Pytorch library for fast transformer implementations ☆1,732 · Updated 2 years ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in Pytorch (see the rotary sketch after this list) ☆751 · Updated last month
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,857 · Updated last year
- Transformer based on a variant of attention whose complexity is linear with respect to sequence length (see the linear-attention sketch after this list) ☆797 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆502 · Updated 3 months ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,205 · Updated last year
- Collection of papers on state-space models ☆600 · Updated 2 weeks ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,254 · Updated last year
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(w), in Pytorch (see the Lion sketch after this list) ☆2,163 · Updated 9 months ago
- Vector (and Scalar) Quantization, in Pytorch ☆3,561 · Updated 3 weeks ago
- Long Range Arena for Benchmarking Efficient Transformers ☆763 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆3,341 · Updated this week
- Maximal update parametrization (µP) ☆1,599 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,575 · Updated last week
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,148 · Updated 3 years ago
- Awesome Papers related to Mamba. ☆1,377 · Updated 11 months ago
- Schedule-Free Optimization in PyTorch ☆2,209 · Updated 4 months ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆809 · Updated 2 years ago
- Official repository of the xLSTM. ☆1,983 · Updated 2 weeks ago
- Foundation Architecture for (M)LLMs ☆3,115 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,171 · Updated 2 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,181 · Updated 2 years ago
- Hopfield Networks is All You Need ☆1,849 · Updated 2 years ago
- Implementation of Hinton's forward-forward (FF) algorithm, an alternative to back-propagation ☆1,488 · Updated 2 years ago
- Mamba SSM architecture ☆15,903 · Updated 2 weeks ago
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆970 · Updated last year
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,169 · Updated last year
- Convolutions for Sequence Modeling ☆898 · Updated last year
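
For the rotary-embedding entry above: RoPE rotates pairs of query/key channels by position-dependent angles, so the dot product of two embedded vectors depends only on their relative offset. A minimal sketch assuming the half-split pairing convention (implementations, including the library listed above, differ in how channels are paired):

```python
import torch

def rotary(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even.
    Channel pair (x1[i], x2[i]) at position m is rotated by m * base**(-i/half)."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype) / half)      # (half,)
    angles = torch.arange(seq_len, dtype=x.dtype)[:, None] * freqs   # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

Applying `rotary` to queries and keys before the attention dot product is the whole mechanism; no positional term is added to the values.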
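For the linear-complexity transformer entries above: the trick behind most linear attention is replacing softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), so a small (dim × dim) summary φ(K)ᵀV is computed once instead of an (L × L) attention matrix. A non-causal sketch using the elu+1 feature map from Katharopoulos et al. (2020); the causal form instead accumulates the summary as a running state:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Non-causal linear attention: O(L * d^2) instead of O(L^2 * d).
    q, k, v: (seq_len, dim)."""
    phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1     # positive feature map
    kv = phi_k.T @ v                               # (dim, dim) summary of K, V
    z = phi_q @ phi_k.sum(dim=0, keepdim=True).T   # (seq, 1) normalizer
    return (phi_q @ kv) / z
```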
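For the Lion entry above: the discovered update is simply the sign of an interpolated momentum, plus decoupled weight decay. A sketch of one step following the paper's pseudocode (the listed repo packages this same rule as a standard PyTorch optimizer); momenta are assumed zero-initialized by the caller:

```python
import torch

@torch.no_grad()
def lion_step(params, grads, momenta, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: p -= lr * (sign(beta1*m + (1-beta1)*g) + wd*p),
    then m = beta2*m + (1-beta2)*g."""
    for p, g, m in zip(params, grads, momenta):
        update = (beta1 * m + (1 - beta1) * g).sign()  # sign of interpolated momentum
        p.add_(update + wd * p, alpha=-lr)             # step with decoupled weight decay
        m.mul_(beta2).add_(g, alpha=1 - beta2)         # momentum EMA
```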