kyegomez / LiquidNet
Implementation of Liquid Nets in Pytorch
☆67 · Updated 2 weeks ago
Alternatives and similar repositories for LiquidNet
Users interested in LiquidNet are comparing it to the libraries listed below.
- Implementation of the proposed minGRU in Pytorch ☆306 · Updated 7 months ago
- Pytorch implementation of the xLSTM model by Beck et al. (2024) ☆176 · Updated last year
- Oscillatory State-Space Models ☆103 · Updated last week
- Training small GPT-2 style models using Kolmogorov-Arnold networks ☆121 · Updated last year
- A State-Space Model with Rational Transfer Function Representation ☆82 · Updated last year
- ☆128 · Updated 3 months ago
- Liquid Structural State-Space Models ☆374 · Updated last year
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆132 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one Pytorch file, using logcumsumexp (Heisen sequence) ☆125 · Updated last year
- A HuggingFace-compatible Small Language Model trainer ☆76 · Updated 9 months ago
- Attempt to make the multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆91 · Updated 4 months ago
- Tutorial to implement a Liquid Time-Constant Neural Network from scratch (English/Russian) ☆34 · Updated last year
- Implementation of the proposed Adam-atan2 from Google DeepMind in Pytorch ☆134 · Updated 3 weeks ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆73 · Updated 2 weeks ago
- Pytorch (Lightning) implementation of the Mamba model ☆31 · Updated 6 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in Pytorch ☆67 · Updated 3 months ago
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in Pytorch ☆94 · Updated 8 months ago
- Community implementation of the paper "Multi-Head Mixture-of-Experts" in PyTorch ☆26 · Updated 2 weeks ago
- Implementation of a Light Recurrent Unit in Pytorch ☆49 · Updated last year
- Pytorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆79 · Updated last year
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling ☆208 · Updated 2 weeks ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆103 · Updated 2 weeks ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4 ☆85 · Updated last year
- Implementation of the GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated 10 months ago
- Implementation of xLSTM in Pytorch from the paper "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated last week
- CUDA implementation of Extended Long Short-Term Memory (xLSTM) with C++ and PyTorch ports ☆89 · Updated last year
- Experiments in Joint Embedding Predictive Architectures (JEPAs) ☆43 · Updated last year
- ☆306 · Updated 10 months ago
- Trying out the Mamba architecture on small examples (CIFAR-10, Shakespeare char-level, etc.) ☆47 · Updated last year
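Several of the repositories above implement liquid or liquid time-constant (LTC) networks. As a point of reference, the core LTC idea is a neuron whose state follows dx/dt = -x/τ + f(x, I)·(A - x), where a bounded gate f modulates both the effective time constant and the target state. The sketch below is a minimal, pure-Python illustration of one fused semi-implicit Euler step of that update; the names (`w`, `b`, `a`, `tau`) and the input-only gate are simplifying assumptions for illustration, not the API of any repository listed here.

```python
import math

def ltc_step(x, i_t, dt=0.1, tau=1.0, w=1.0, b=0.0, a=1.0):
    """One semi-implicit Euler step of a scalar liquid time-constant
    neuron: dx/dt = -x/tau + f(I) * (a - x).

    Illustrative sketch only: in a full LTC network the gate f also
    depends on the state x and on learned per-synapse parameters.
    """
    # Bounded nonlinearity gating both the decay rate and the target `a`.
    f = 1.0 / (1.0 + math.exp(-(w * i_t + b)))
    # Fused update: solving the step implicitly keeps the state stable
    # even for stiff (large 1/tau + f) dynamics.
    return (x + dt * f * a) / (1.0 + dt * (1.0 / tau + f))

# Driving the neuron with a constant input relaxes the state toward a
# bounded fixed point x* = f*a / (1/tau + f), here about 0.42.
x = 0.0
for _ in range(100):
    x = ltc_step(x, i_t=1.0)
```

The division by `1 + dt*(1/tau + f)` is what makes the step unconditionally stable: the state can never overshoot the interval set by the target, which is the property liquid networks rely on for long rollouts.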