HazyResearch / hippo-code
☆191 · Updated last year
Alternatives and similar repositories for hippo-code
Users interested in hippo-code are comparing it to the libraries listed below:
- Sequence Modeling with Structured State Spaces ☆67 · Updated 3 years ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on the Annotated S4. ☆87 · Updated last year
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data", published at ICLR 2022. https://arxiv.org/abs/21… ☆124 · Updated 3 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆54 · Updated 2 years ago
- PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆81 · Updated last year
- Gaussian-Bernoulli Restricted Boltzmann Machines ☆106 · Updated 3 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆510 · Updated 6 months ago
- Package for working with hypernetworks in PyTorch. ☆131 · Updated 2 years ago
- Non-official implementation of the Linear Recurrent Unit (LRU; Orvieto et al., 2023) ☆61 · Updated 3 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆101 · Updated 2 years ago
- Official implementation of Transformer Neural Processes ☆78 · Updated 3 years ago
- Unofficial implementation of Linear Recurrent Units, by DeepMind, in PyTorch ☆72 · Updated 8 months ago
- Easy Hypernetworks in PyTorch and JAX ☆106 · Updated 2 years ago
- Implementation of Block Recurrent Transformer in PyTorch ☆223 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆111 · Updated 4 years ago
- Official implementation of "Transformers Can Do Bayesian Inference", the PFN paper ☆244 · Updated last year
- Parallelizing non-linear sequential models over the sequence length ☆56 · Updated 6 months ago
- Transformers with doubly stochastic attention ☆51 · Updated 3 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Rational Activation Functions, replacing Padé Activation Units ☆103 · Updated 9 months ago
- Implementation of Mega, the single-head attention with multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆206 · Updated 2 years ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year
- A PyTorch implementation of Legendre Memory Units (LMUs) and their FFT variant ☆43 · Updated 4 years ago
- Implementation of Linformer in PyTorch ☆303 · Updated last year
- Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization ☆344 · Updated last year
- Code for "Neural Rough Differential Equations for Long Time Series" (ICML 2021) ☆121 · Updated 4 years ago