state-spaces / s4
Structured state space sequence models
☆2,818 · Updated last year
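For orientation, the core of S4-style models is a discretized linear state space recurrence, x_k = Ā·x_{k-1} + B̄·u_k with readout y_k = C·x_k. Below is a minimal sequential-scan sketch of that recurrence, assuming a toy diagonal parameterization and shapes of my choosing; it illustrates the recurrence only, not the repo's optimized convolution kernels.

```python
import torch

def ssm_scan(A_bar, B_bar, C, u):
    """Sequential scan of a discretized linear SSM with a diagonal state
    matrix: x_k = A_bar * x_{k-1} + B_bar * u_k,  y_k = C . x_k.
    A_bar, B_bar, C are (N,) vectors; u is an (L,) input sequence."""
    x = torch.zeros_like(A_bar)          # state x_0 = 0
    ys = []
    for u_k in u:
        x = A_bar * x + B_bar * u_k      # state update
        ys.append((C * x).sum())         # scalar readout y_k
    return torch.stack(ys)               # (L,) outputs

# toy usage: stable dynamics need |A_bar| < 1
y = ssm_scan(torch.rand(4) * 0.9, torch.rand(4), torch.rand(4), torch.randn(8))
```

In training, S4 avoids this sequential loop by computing the same map as a long convolution whose kernel is derived from (Ā, B̄, C), which is what makes it parallelizable.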
Alternatives and similar repositories for s4
Users interested in s4 are comparing it to the libraries listed below.
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,912 · Updated last year
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,405 · Updated last year
- PyTorch library for fast transformer implementations ☆1,757 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆511 · Updated 7 months ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,781 · Updated 2 weeks ago
- Transformer based on a variant of attention that has linear complexity with respect to sequence length (see the linear-attention sketch after this list) ☆823 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch ☆1,192 · Updated 2 years ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch ☆2,182 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch (see the rotary sketch after this list) ☆790 · Updated 5 months ago
- Vector (and Scalar) Quantization, in PyTorch (see the quantization sketch after this list) ☆3,825 · Updated last week
- Collection of papers on state-space models ☆615 · Updated 2 months ago
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,173 · Updated 3 years ago
- Maximal update parametrization (µP) ☆1,657 · Updated last year
- Reformer, the efficient Transformer, in PyTorch ☆2,193 · Updated 2 years ago
- Schedule-Free Optimization in PyTorch ☆2,254 · Updated 8 months ago
- Long Range Arena for Benchmarking Efficient Transformers ☆773 · Updated 2 years ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆841 · Updated 2 years ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,212 · Updated 2 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,218 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,133 · Updated last year
- Official repository of the xLSTM. ☆2,091 · Updated 2 months ago
- A PyTorch implementation of the vector quantized variational autoencoder (https://arxiv.org/abs/1711.00937) ☆870 · Updated 3 years ago
- TorchCFM: a Conditional Flow Matching library ☆2,242 · Updated 2 months ago
- Mamba SSM architecture ☆16,979 · Updated last week
- Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022) ☆1,978 · Updated last year
- Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TF and others) ☆9,358 · Updated 2 weeks ago
- A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes… ☆4,008 · Updated 2 weeks ago
- An implementation of 1D, 2D, and 3D positional encoding in PyTorch and TensorFlow ☆615 · Updated last year
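As referenced above, the linear-attention entry replaces softmax attention with a feature-map factorization so that cost grows linearly in sequence length. A non-causal sketch assuming the φ(x) = elu(x) + 1 feature map of Katharopoulos et al. (2020); real implementations add causal masking, batching, and heads:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """O(L) attention via the kernel trick: softmax(QK^T)V is approximated
    by phi(Q) (phi(K)^T V) with phi = elu + 1.
    q, k: (L, d); v: (L, d_v)."""
    phi_q = F.elu(q) + 1
    phi_k = F.elu(k) + 1
    kv = phi_k.T @ v                                  # (d, d_v), cost O(L * d * d_v)
    z = phi_q @ phi_k.sum(dim=0, keepdim=True).T      # (L, 1) per-row normalizer
    return (phi_q @ kv) / (z + eps)

out = linear_attention(torch.randn(128, 16), torch.randn(128, 16), torch.randn(128, 32))
```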
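The rotary-embedding entry encodes positions by rotating pairs of query/key channels by position-dependent angles. A self-contained sketch of the rotate-half formulation (frequency base and shapes are illustrative; the listed library caches the cos/sin tables and handles batch and head dimensions):

```python
import torch

def apply_rotary(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, dim),
    pairing channel i with channel i + dim//2. dim must be even."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype) / half)      # (half,)
    angles = torch.arange(seq_len, dtype=x.dtype)[:, None] * freqs   # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q_rot = apply_rotary(torch.randn(10, 8))  # rotate a toy query sequence
```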
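And the vector-quantization entries reduce at their core to a nearest-codebook lookup plus a straight-through gradient. A minimal lookup sketch (codebook updates and commitment losses omitted; the function name is mine):

```python
import torch

def vq_lookup(z, codebook):
    """Map each row of z (L, d) to its nearest row of codebook (K, d)
    by Euclidean distance, with a straight-through gradient."""
    idx = torch.cdist(z, codebook).argmin(dim=-1)   # (L,) nearest code ids
    z_q = codebook[idx]                             # quantized vectors
    # straight-through trick: gradients flow to z as if quantization were identity
    return z + (z_q - z).detach(), idx

z = torch.randn(16, 8, requires_grad=True)
z_q, idx = vq_lookup(z, torch.randn(512, 8))
```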