HazyResearch / hippo-code
☆190 · Updated last year
Alternatives and similar repositories for hippo-code
Users interested in hippo-code are comparing it to the libraries listed below.
- ☆310 · Updated 11 months ago
- Sequence Modeling with Structured State Spaces ☆66 · Updated 3 years ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4. ☆87 · Updated last year
- ☆164 · Updated 2 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data", published at ICLR 2022. https://arxiv.org/abs/21… ☆124 · Updated 3 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Gaussian-Bernoulli Restricted Boltzmann Machines ☆106 · Updated 3 years ago
- PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆80 · Updated last year
- Unofficial implementation of Linear Recurrent Units, by DeepMind, in PyTorch ☆72 · Updated 7 months ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆54 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆101 · Updated 2 years ago
- Package for working with hypernetworks in PyTorch ☆131 · Updated 2 years ago
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆60 · Updated 3 months ago
- Transformers with doubly stochastic attention ☆50 · Updated 3 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆507 · Updated 5 months ago
- Easy Hypernetworks in PyTorch and JAX ☆106 · Updated 2 years ago
- Parallelizing non-linear sequential models over the sequence length ☆56 · Updated 5 months ago
- Rational Activation Functions - Replacing Padé Activation Units ☆102 · Updated 8 months ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆110 · Updated 4 years ago
- Implementation of Block Recurrent Transformer in PyTorch ☆223 · Updated last year
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Code for "Neural Rough Differential Equations for Long Time Series" (ICML 2021) ☆121 · Updated 4 years ago
- Implementation of Mega, the single-head attention with multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Updated 2 years ago
- Official implementation of Transformer Neural Processes ☆78 · Updated 3 years ago
- Code for the paper "Generative Flow Networks for Discrete Probabilistic Modeling" ☆85 · Updated 2 years ago
- Official implementation of "Transformers Can Do Bayesian Inference", the PFN paper ☆240 · Updated last year
- An implementation of local windowed attention for language modeling ☆489 · Updated 4 months ago
- Official code for Coupled Oscillatory RNN (ICLR 2021, Oral) ☆51 · Updated 4 years ago
- Sequence modeling with Mega ☆301 · Updated 2 years ago
- [ICLR'25] Artificial Kuramoto Oscillatory Neurons ☆105 · Updated last month