state-spaces/s4
Structured state space sequence models
☆2,553 · Updated 7 months ago
Alternatives and similar repositories for s4:
Users interested in s4 are comparing it to the libraries listed below.
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,125 · Updated 2 months ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch. ☆635 · Updated 2 months ago
- Implementation of https://srush.github.io/annotated-s4 ☆483 · Updated 2 years ago
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,717 · Updated 11 months ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length. ☆738 · Updated 9 months ago
- Vector (and Scalar) Quantization, in PyTorch. ☆2,922 · Updated last week
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch. ☆2,094 · Updated 2 months ago
- PyTorch library for fast transformer implementations. ☆1,677 · Updated last year
- Mamba SSM architecture. ☆14,009 · Updated last month
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models". ☆1,173 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers. ☆5,080 · Updated this week
- Schedule-Free Optimization in PyTorch. ☆2,098 · Updated 2 months ago
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch. ☆1,120 · Updated last year
- Awesome papers related to Mamba. ☆1,306 · Updated 4 months ago
- Collection of papers on state-space models. ☆575 · Updated 3 weeks ago
- Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States". ☆1,126 · Updated 7 months ago
- Long Range Arena for benchmarking efficient transformers. ☆745 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton. ☆1,912 · Updated this week
- Foundation Architecture for (M)LLMs. ☆3,046 · Updated 10 months ago
- Official repository of the xLSTM. ☆1,692 · Updated last month
- Maximal Update Parametrization (µP). ☆1,451 · Updated 7 months ago
- An implementation of Performer, a linear-attention-based transformer, in PyTorch. ☆1,115 · Updated 3 years ago
- Optax is a gradient processing and optimization library for JAX. ☆1,802 · Updated this week
- JAX: a curated list of resources (https://github.com/google/jax). ☆1,699 · Updated this week
- Annotated version of the Mamba paper. ☆473 · Updated 11 months ago
- Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022). ☆1,872 · Updated 6 months ago
- Official code for "Score-Based Generative Modeling through Stochastic Differential Equations" (ICLR 2021, Oral). ☆1,571 · Updated 2 years ago
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. ☆2,463 · Updated last week
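The common thread among s4, Mamba, and the other SSM libraries above is the linear state-space recurrence and its dual convolutional form. The following is a minimal sketch in pure Python (not the API of any library listed; the scalar parameters `A`, `B`, `C` are illustrative assumptions): a 1-D SSM computed two ways, as a step-by-step recurrence and as a convolution with the kernel `K = (CB, CAB, CA²B, …)`. S4-style models exploit exactly this equivalence: the recurrent view gives fast autoregressive inference, while the convolutional view allows parallel training.

```python
def ssm_recurrent(A, B, C, u):
    """Run x_k = A*x_{k-1} + B*u_k, y_k = C*x_k step by step."""
    x, ys = 0.0, []
    for u_k in u:
        x = A * x + B * u_k
        ys.append(C * x)
    return ys

def ssm_convolutional(A, B, C, u):
    """Same output via convolution with the unrolled SSM kernel."""
    L = len(u)
    K = [C * (A ** i) * B for i in range(L)]  # kernel: CB, CAB, CA^2B, ...
    return [sum(K[j] * u[k - j] for j in range(k + 1)) for k in range(L)]

# Toy scalar SSM (illustrative values, not from any paper):
A, B, C = 0.9, 1.0, 0.5
u = [1.0, 2.0, 0.5, -1.0]
y_rec = ssm_recurrent(A, B, C, u)
y_conv = ssm_convolutional(A, B, C, u)
assert all(abs(a - b) < 1e-9 for a, b in zip(y_rec, y_conv))
```

Real implementations use vector-valued states with structured (e.g. diagonal) `A` matrices and compute the kernel spectrally or via parallel scans, but the recurrence/convolution duality sketched here is the underlying principle.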