state-spaces / s4
Structured state space sequence models
☆2,703 · Updated last year
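For orientation, here is a minimal sketch of the discretized linear state space recurrence that S4-style models are built on. This is not the s4 repo's API; the bilinear discretization step and all names and shapes below are illustrative assumptions.

```python
# Minimal sketch of a discretized linear SSM: x' = A x + B u, y = C x,
# run as a recurrence after bilinear (Tustin) discretization.
# Illustrative only -- not the state-spaces/s4 API.
import torch

def ssm_scan(A, B, C, u, dt=1.0):
    """A: (N, N) state matrix, B: (N, 1), C: (1, N), u: (L,) input signal."""
    N = A.shape[0]
    I = torch.eye(N)
    # Bilinear discretization of the continuous-time system.
    inv = torch.linalg.inv(I - (dt / 2) * A)
    Ab = inv @ (I + (dt / 2) * A)
    Bb = inv @ (dt * B)
    x = torch.zeros(N, 1)
    ys = []
    for k in range(u.shape[0]):
        x = Ab @ x + Bb * u[k]        # state update: x_{k+1} = Ab x_k + Bb u_k
        ys.append((C @ x).squeeze())  # readout: y_k = C x_k
    return torch.stack(ys)

# Toy usage: a 4-dimensional state scanned over a length-16 input.
y = ssm_scan(torch.randn(4, 4) * 0.1, torch.randn(4, 1),
             torch.randn(1, 4), torch.randn(16))
```

S4 itself avoids this sequential loop at training time by exploiting structure in A to compute the same map as a long convolution; the recurrence above is the plain reference semantics.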
Alternatives and similar repositories for s4
Users interested in s4 are comparing it to the libraries listed below.
- A simple and efficient Mamba implementation in pure PyTorch and MLX.☆1,304 · Updated 8 months ago
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch.☆2,845 · Updated last year
- Implementation of https://srush.github.io/annotated-s4☆500 · Updated last month
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch☆727 · Updated 2 weeks ago
- Pytorch library for fast transformer implementations☆1,725 · Updated 2 years ago
- Transformer based on a variant of attention that has linear complexity with respect to sequence length (see the sketch after this list)☆793 · Updated last year
- Maximal update parametrization (µP)☆1,579 · Updated last year
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"☆1,201 · Updated last year
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch☆2,153 · Updated 8 months ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch☆1,143 · Updated 3 years ago
- Vector (and Scalar) Quantization, in Pytorch☆3,468 · Updated 3 weeks ago
- Long Range Arena for Benchmarking Efficient Transformers☆762 · Updated last year
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States☆1,241 · Updated last year
- Schedule-Free Optimization in PyTorch☆2,201 · Updated 2 months ago
- Official repository of xLSTM☆1,952 · Updated 2 months ago
- Foundation Architecture for (M)LLMs☆3,101 · Updated last year
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models☆788 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch☆1,165 · Updated last year
- Collection of papers on state-space models☆595 · Updated 3 months ago
- Hopfield Networks is All You Need☆1,836 · Updated 2 years ago
- Reformer, the efficient Transformer, in Pytorch☆2,178 · Updated 2 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers☆5,505 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models☆3,017 · Updated this week
- ☆783 · Updated 2 months ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538☆1,151 · Updated last year
- Closed-form Continuous-time Neural Networks☆968 · Updated last year
- ☆298 · Updated 7 months ago
- An implementation of local windowed attention for language modeling☆471 · Updated 3 weeks ago
- Convolutions for Sequence Modeling☆895 · Updated last year
- Liquid Structural State-Space Models☆366 · Updated last year
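Several entries above (the linear-complexity transformer, Performer, and the linear attention collection) share one idea: replace softmax attention with a kernel feature map so the key-value products can be summed once, making cost linear in sequence length. A minimal non-causal sketch, assuming the elu+1 feature map of Katharopoulos et al. (2020); names and shapes are illustrative, not any listed repo's API.

```python
# Minimal sketch of kernelized linear attention. Illustrative only --
# not the API of any repository listed above.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """q, k: (B, L, D); v: (B, L, Dv). Cost is O(L * D * Dv), linear in L."""
    q = F.elu(q) + 1  # positive feature map phi(q)
    k = F.elu(k) + 1  # positive feature map phi(k)
    # Sum keys against values once: (B, D, Dv), independent of query position.
    kv = torch.einsum("bld,ble->bde", k, v)
    # Normalizer: phi(q_l) . sum_l phi(k_l), per query position.
    z = 1 / (torch.einsum("bld,bd->bl", q, k.sum(dim=1)) + eps)
    return torch.einsum("bld,bde,bl->ble", q, kv, z)

# Toy usage: output has the same shape as v.
q = k = v = torch.randn(2, 128, 16)
out = linear_attention(q, k, v)  # (2, 128, 16)
```

The causal variant replaces the single `kv` sum with a running prefix sum over positions, which is what connects these models back to the recurrent SSM view at the top of this page.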