HazyResearch / hippo-code
☆175 · Updated 9 months ago
Alternatives and similar repositories for hippo-code:
Users interested in hippo-code are comparing it to the repositories listed below.
- Sequence Modeling with Structured State Spaces ☆63 · Updated 2 years ago
- ☆286 · Updated 2 months ago
- ☆164 · Updated 2 years ago
- PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆75 · Updated 10 months ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4 ☆77 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆485 · Updated 2 years ago
- Implementation of Block Recurrent Transformer - Pytorch ☆218 · Updated 7 months ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data", published at ICLR 2022. https://arxiv.org/abs/21… ☆119 · Updated 2 years ago
- Sequence modeling with Mega ☆295 · Updated 2 years ago
- Gaussian-Bernoulli Restricted Boltzmann Machines ☆102 · Updated 2 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆122 · Updated last year
- Implementations of various linear RNN layers using PyTorch and Triton ☆50 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆99 · Updated 2 years ago
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆52 · Updated 4 months ago
- Official implementation of "Transformers Can Do Bayesian Inference", the PFN paper ☆207 · Updated 4 months ago
- Implementation of Linformer for PyTorch ☆274 · Updated last year
- Implementation of Mega, the single-head attention with multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- Unofficial implementation of Linear Recurrent Units, by DeepMind, in PyTorch ☆68 · Updated last year
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 2 years ago
- Parallelizing non-linear sequential models over the sequence length ☆51 · Updated 2 months ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆751 · Updated 10 months ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆113 · Updated 4 years ago
- Hierarchical Associative Memory User Experience ☆100 · Updated last year
- Modern Fixed Point Systems using PyTorch ☆88 · Updated last year
- Package for working with hypernetworks in PyTorch ☆122 · Updated last year
- ☆169 · Updated 3 months ago
- ☆60 · Updated 3 years ago
- ☆125 · Updated last year
- Transformers with doubly stochastic attention ☆45 · Updated 2 years ago
- Accelerated First Order Parallel Associative Scan ☆175 · Updated 7 months ago