ag1988 / dss · Links
Sequence Modeling with Structured State Spaces
☆66 · Updated 3 years ago
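For context, dss implements diagonal state space (DSS) layers, which replace S4's structured state matrix with a diagonal one. Below is a minimal, illustrative sketch of that core idea: materializing a 1-D convolution kernel from a diagonal state matrix. The names (`dss_kernel`, `Lambda`, `W`, `step`) are illustrative assumptions, not the repo's actual API.

```python
import torch

def dss_kernel(Lambda, W, step, L):
    """Materialize a length-L convolution kernel from a diagonal SSM.

    Lambda: (N,) complex diagonal of the state matrix A
    W:      (N,) complex output weights (B and C folded together)
    step:   scalar discretization step size
    L:      kernel length
    """
    # With a diagonal A, the discretized kernel reduces to a sum of
    # complex exponentials: K[l] = sum_n W[n] * exp(Lambda[n] * step * l)
    pos = torch.arange(L, dtype=torch.float32)
    P = Lambda.unsqueeze(-1) * step * pos            # (N, L)
    return (W.unsqueeze(-1) * P.exp()).sum(0).real   # (L,)

# Usage sketch: causal convolution of an input sequence with the kernel.
N, L = 16, 128
Lambda = -torch.rand(N) + 1j * torch.randn(N)   # decaying modes (Re < 0)
W = torch.randn(N, dtype=torch.cfloat)
K = dss_kernel(Lambda, W, step=0.1, L=L)

u = torch.randn(1, 1, L)
# conv1d is cross-correlation, so flip the kernel and keep the first L outputs.
y = torch.nn.functional.conv1d(u, K.flip(0).view(1, 1, L), padding=L - 1)[..., :L]
```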
Alternatives and similar repositories for dss
Users interested in dss are comparing it to the libraries listed below.
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆101 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆126 · Updated last year
- ☆183 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- Transformers with doubly stochastic attention ☆47 · Updated 3 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data" published at ICLR 2022. https://arxiv.org/abs/21… ☆123 · Updated 2 years ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆53 · Updated 2 years ago
- Official code for Long Expressive Memory (ICLR 2022, Spotlight) ☆70 · Updated 3 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆205 · Updated 2 years ago
- Gaussian-Bernoulli Restricted Boltzmann Machines ☆105 · Updated 2 years ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆82 · Updated last year
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 4 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆64 · Updated 3 years ago
- ☆32 · Updated last year
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4. ☆86 · Updated last year
- Easy Hypernetworks in Pytorch and Jax ☆104 · Updated 2 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆61 · Updated 2 years ago
- Parallelizing non-linear sequential models over the sequence length ☆54 · Updated 2 months ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆33 · Updated 2 years ago
- Layerwise Batch Entropy Regularization ☆23 · Updated 3 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆58 · Updated last year
- Repository for the "Gotta Go Fast When Generating Data with Score-Based Models" paper ☆105 · Updated 3 years ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated 2 years ago
- Pytorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆77 · Updated last year
- A minimalist implementation of score-based diffusion model ☆129 · Updated 4 years ago