state-spaces / s4
Structured state space sequence models
☆2,794 · Updated last year
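For orientation, S4-style models are built on a learned linear state space recurrence. The snippet below is a minimal, self-contained sketch of that discrete-time recurrence in plain PyTorch; it is illustrative only, and the function and tensor names are assumptions rather than this repository's API.

```python
# Hedged sketch of the discrete state space recurrence that S4-style
# models are built on (illustrative only, not the s4 repo's API):
#   x_k = A x_{k-1} + B u_k
#   y_k = C x_k + D u_k
import torch

def ssm_scan(A, B, C, D, u):
    """Run a single-input single-output discrete SSM over a sequence u.

    A: (N, N) state matrix, B: (N, 1), C: (1, N), D: (1,) feedthrough.
    u: (L,) input sequence. Returns y: (L,) output sequence.
    """
    N = A.shape[0]
    x = torch.zeros(N, 1)
    ys = []
    for u_k in u:  # O(L) sequential scan over the input
        x = A @ x + B * u_k
        ys.append((C @ x).squeeze() + D.squeeze() * u_k)
    return torch.stack(ys)

# Example: a random stable SSM over a length-16 sequence
A = 0.9 * torch.eye(4)
B = torch.randn(4, 1)
C = torch.randn(1, 4)
D = torch.randn(1)
y = ssm_scan(A, B, C, D, torch.randn(16))
```

S4 itself avoids this O(L) sequential loop by exploiting the structure of A to evaluate the same input-to-output map as a long convolution.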
Alternatives and similar repositories for s4
Users interested in s4 are comparing it to the libraries listed below.
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,384 · Updated last year
- PyTorch library for fast transformer implementations ☆1,755 · Updated 2 years ago
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,901 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆507 · Updated 5 months ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆782 · Updated 4 months ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,210 · Updated 2 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length (see the sketch after this list) ☆815 · Updated last year
- An implementation of Performer, a linear-attention-based transformer, in PyTorch ☆1,168 · Updated 3 years ago
- Long Range Arena for benchmarking efficient Transformers ☆769 · Updated last year
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in PyTorch ☆2,177 · Updated last year
- Maximal update parametrization (µP) ☆1,638 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch ☆1,183 · Updated 2 years ago
- Reformer, the efficient Transformer, in PyTorch ☆2,191 · Updated 2 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,710 · Updated last month
- Hopfield Networks is All You Need ☆1,882 · Updated 2 years ago
- Foundation Architecture for (M)LLMs ☆3,125 · Updated last year
- Official repository of xLSTM. ☆2,059 · Updated last month
- Vector (and Scalar) Quantization, in PyTorch ☆3,756 · Updated this week
- Schedule-Free Optimization in PyTorch ☆2,237 · Updated 6 months ago
- Convolutions for Sequence Modeling ☆908 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,210 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,049 · Updated this week
- An All-MLP solution for Vision, from Google AI ☆1,053 · Updated 5 months ago
- ☆312 · Updated 11 months ago
- ☆789 · Updated 3 weeks ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,675 · Updated 2 weeks ago
- An implementation of local windowed attention for language modeling ☆489 · Updated 4 months ago
- Mamba SSM architecture ☆16,646 · Updated last month
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆980 · Updated last year
- TorchCFM: a Conditional Flow Matching library ☆2,174 · Updated last month
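As referenced in the linear-complexity attention item above, the following is a hedged sketch of (non-causal) kernelized linear attention in the style of Katharopoulos et al. (2020): a positive feature map φ replaces the softmax, so the L×L attention matrix is never materialized. Function and tensor names here are illustrative assumptions, not any listed library's API.

```python
# Hedged sketch of non-causal linear attention: softmax(QK^T)V is replaced
# by phi(Q) (phi(K)^T V), computed right-to-left so cost is O(L * D * Dv)
# instead of O(L^2 * D). Illustrative only.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """q, k: (B, L, D); v: (B, L, Dv). Returns (B, L, Dv)."""
    phi_q = F.elu(q) + 1  # positive feature map phi(x) = elu(x) + 1
    phi_k = F.elu(k) + 1
    kv = torch.einsum('bld,blv->bdv', phi_k, v)    # sum_j phi(k_j) v_j^T
    z = phi_q @ phi_k.sum(dim=1).unsqueeze(-1)     # normalizer phi(q_i)^T sum_j phi(k_j)
    return torch.einsum('bld,bdv->blv', phi_q, kv) / (z + eps)

# Example: batch of 2, sequence length 128, head dim 64
y = linear_attention(torch.randn(2, 128, 64), torch.randn(2, 128, 64),
                     torch.randn(2, 128, 64))
```

The associativity trick (contracting φ(K)ᵀV before touching the queries) is what drops the cost from quadratic to linear in sequence length; a causal variant would replace the sums over j with prefix sums.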