ag1988 / dss
Sequence Modeling with Structured State Spaces
☆67 · Updated 3 years ago
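The repo implements the Diagonal State Space (DSS) layer from "Diagonal State Spaces are as Effective as Structured State Spaces" (Gupta et al., 2022). As rough orientation for the comparisons below, here is a minimal sketch of the core idea: parameterize the state matrix as a complex diagonal, materialize the convolution kernel as a Vandermonde sum, and apply it causally with an FFT. The class name `DiagonalSSM`, the hyperparameters (`d_state=64`, the `1e-2` timescale init), and the initialization scheme are illustrative assumptions, not the repo's actual code.

```python
import math
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    """Illustrative diagonal state-space layer: y = conv(u, K) + D * u,
    with kernel K[h, l] = sum_n w[h, n] * exp(lambda_n * dt_h * l)."""

    def __init__(self, d_model: int, d_state: int = 64):
        super().__init__()
        # Diagonal state matrix Lambda = -exp(log_neg_re) + i * im; the
        # negative real part keeps the recurrence stable. Init is illustrative.
        self.log_neg_re = nn.Parameter(torch.full((d_state,), math.log(0.5)))
        self.im = nn.Parameter(math.pi * torch.arange(d_state, dtype=torch.float))
        # Per-channel timescales and complex mixing weights (stored as re/im pairs).
        self.log_dt = nn.Parameter(torch.full((d_model,), math.log(1e-2)))
        self.w = nn.Parameter(torch.randn(d_model, d_state, 2) * d_state ** -0.5)
        self.D = nn.Parameter(torch.randn(d_model))  # residual/skip term

    def kernel(self, L: int) -> torch.Tensor:
        lam = -torch.exp(self.log_neg_re) + 1j * self.im        # (N,)
        dt = torch.exp(self.log_dt)                             # (H,)
        # Vandermonde entries exp(lambda * dt * l) for positions l = 0..L-1.
        s = dt[:, None, None] * lam[None, :, None]              # (H, N, 1)
        P = torch.exp(s * torch.arange(L, device=s.device))     # (H, N, L)
        w = torch.view_as_complex(self.w)                       # (H, N)
        return torch.einsum('hn,hnl->hl', w, P).real            # (H, L)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, d_model, L). Causal convolution via FFT, as in S4/DSS;
        # zero-padding to 2L makes the circular convolution linear.
        L = u.size(-1)
        K_f = torch.fft.rfft(self.kernel(L), n=2 * L)
        u_f = torch.fft.rfft(u, n=2 * L)
        y = torch.fft.irfft(u_f * K_f, n=2 * L)[..., :L]
        return y + self.D[None, :, None] * u
```

The FFT convolution is the O(L log L) trick shared by S4-family models; DSS's observation is that a diagonal parameterization reduces the kernel to a simple Vandermonde sum, avoiding S4's structured (DPLR) kernel machinery.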
Alternatives and similar repositories for dss
Users interested in dss are comparing it to the libraries listed below.
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆102 · Updated 2 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- ☆192 · Updated last year
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆206 · Updated 2 years ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆54 · Updated 2 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data" published at ICLR 2022. https://arxiv.org/abs/21… ☆125 · Updated 3 years ago
- Transformers with doubly stochastic attention ☆51 · Updated 3 years ago
- Official code for Long Expressive Memory (ICLR 2022, Spotlight) ☆71 · Updated 3 years ago
- Easy Hypernetworks in PyTorch and JAX ☆106 · Updated 2 years ago
- Standalone Product Key Memory module in PyTorch, for augmenting Transformer models ☆87 · Updated 2 months ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4. ☆87 · Updated last year
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆111 · Updated 4 years ago
- ☆32 · Updated 2 years ago
- ☆62 · Updated last year
- ☆35 · Updated last year
- Blog post ☆17 · Updated last year
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated 2 years ago
- ☆31 · Updated 4 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆65 · Updated 3 years ago
- The accompanying code for "Simplifying and Understanding State Space Models with Diagonal Linear RNNs" (Ankit Gupta, Harsh Mehta, Jonatha… ☆23 · Updated 3 years ago
- Experiments on the impact of depth in transformers and SSMs. ☆40 · Updated 2 months ago
- ☆40 · Updated 2 years ago
- Implementation of Block Recurrent Transformer, in PyTorch ☆223 · Updated last year
- Implementation of Discrete Key / Value Bottleneck, in PyTorch ☆88 · Updated 2 years ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆39 · Updated 3 years ago