HazyResearch / hippo-code
☆192 · Updated last year
Alternatives and similar repositories for hippo-code
Users interested in hippo-code are comparing it to the repositories listed below.
- ☆314 · Updated last year
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4. ☆87 · Updated last year
- Sequence Modeling with Structured State Spaces ☆67 · Updated 3 years ago
- ☆163 · Updated 2 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data", published at ICLR 2022. https://arxiv.org/abs/21… ☆125 · Updated 3 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Gaussian-Bernoulli Restricted Boltzmann Machines ☆106 · Updated 3 years ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆54 · Updated 2 years ago
- PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆82 · Updated last year
- Implementation of Block Recurrent Transformer in PyTorch ☆223 · Updated last year
- Unofficial implementation of Linear Recurrent Units, by DeepMind, in PyTorch ☆73 · Updated 8 months ago
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆61 · Updated 4 months ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers". ☆111 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆102 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆511 · Updated 6 months ago
- Package for working with hypernetworks in PyTorch. ☆131 · Updated 2 years ago
- Implementation of Mega, the single-head attention with multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆206 · Updated 2 years ago
- Code for "Neural Rough Differential Equations for Long Time Series" (ICML 2021) ☆122 · Updated 4 years ago
- Transformers with doubly stochastic attention ☆52 · Updated 3 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Implementation of Linformer for PyTorch ☆304 · Updated 2 years ago
- Rational Activation Functions, replacing Padé Activation Units ☆103 · Updated 10 months ago
- Easy Hypernetworks in PyTorch and JAX ☆106 · Updated 2 years ago
- Parallelizing non-linear sequential models over the sequence length ☆56 · Updated 6 months ago
- Official implementation of "Transformers Can Do Bayesian Inference", the PFN paper ☆251 · Updated last year
- A State-Space Model with Rational Transfer Function Representation ☆83 · Updated last year
- Sequence modeling with Mega ☆302 · Updated 2 years ago
- A PyTorch implementation of Legendre Memory Units (LMUs) and their FFT variant ☆43 · Updated 4 years ago
- ☆66 · Updated 4 years ago
- Official implementation of Transformer Neural Processes ☆78 · Updated 3 years ago
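For orientation on what ties these repositories together: HiPPO (the subject of hippo-code, and the memory mechanism underlying the S4/S5 line of state-space models above) maintains an online summary of a signal's history as coefficients on a Legendre basis, updated by a small linear recurrence. A minimal NumPy sketch of the HiPPO-LegS matrices and a bilinearly discretized update is shown below; function and variable names are illustrative and do not reflect hippo-code's actual API, and the sign convention assumed is dc/dt = (1/t)(-A c + B f).

```python
import numpy as np

def hippo_legs(N):
    """HiPPO-LegS state matrices (assumed convention dc/dt = (1/t)(-A c + B f)).

    A[n, k] = sqrt(2n+1) * sqrt(2k+1)  if n > k
            = n + 1                    if n == k
            = 0                        if n < k
    B[n]    = sqrt(2n+1)
    """
    n = np.arange(N)
    r = np.sqrt(2.0 * n + 1.0)
    A = np.where(n[:, None] > n[None, :], np.outer(r, r), 0.0) + np.diag(n + 1.0)
    return A, r.copy()

def legs_step(c, f_k, k, A, B):
    """Advance the coefficient state from step k-1 to step k,
    absorbing the new sample f_k (bilinear discretization)."""
    N = c.shape[0]
    lhs = np.eye(N) + A / (2.0 * k)
    rhs = (np.eye(N) - A / (2.0 * k)) @ c + (B / k) * f_k
    return np.linalg.solve(lhs, rhs)

# Online compression: 200 samples summarized as 16 Legendre coefficients.
A, B = hippo_legs(16)
c = np.zeros(16)
for k, f_k in enumerate(np.sin(8.0 * np.linspace(0.0, 1.0, 200)), start=1):
    c = legs_step(c, f_k, k, A, B)
```

In the S4 family, a (diagonalizable approximation of) this A matrix is used to initialize the SSM kernel rather than being iterated step by step as above; the dense `linalg.solve` per step here is only for clarity.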