tk-rusch / linoss
Oscillatory State-Space Models
☆108 · Updated last month
Alternatives and similar repositories for linoss
Users interested in linoss are comparing it to the repositories listed below; a minimal sketch of the oscillatory recurrence behind linoss follows the list.
- A State-Space Model with Rational Transfer Function Representation. ☆83 · Updated last year
- ☆310 · Updated 10 months ago
- ☆146 · Updated 3 weeks ago
- [ICLR'25] Artificial Kuramoto Oscillatory Neurons ☆105 · Updated last month
- PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5) ☆81 · Updated last year
- ☆224 · Updated 11 months ago
- Official JAX implementation of xLSTM, including fast and efficient training and inference code. 7B model available at https://huggingface.… ☆104 · Updated 10 months ago
- PyTorch code for the Energy-Based Transformers paper: generalizable reasoning and scalable learning ☆561 · Updated 2 weeks ago
- Scalable and Stable Parallelization of Nonlinear RNNs ☆25 · Updated last month
- Patched Attention for Nonlinear Dynamics ☆161 · Updated this week
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on the Annotated S4. ☆85 · Updated last year
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆106 · Updated 2 months ago
- Code repository for Trajectory Flow Matching ☆90 · Updated last year
- ☆34 · Updated last year
- Training small GPT-2-style models using Kolmogorov-Arnold networks. ☆121 · Updated last year
- Unofficial PyTorch implementation of Linear Recurrent Units, by DeepMind ☆72 · Updated 7 months ago
- PyTorch implementation of Evolutionary Policy Optimization, from Wang et al. of the Robotics Institute at Carnegie Mellon University ☆104 · Updated 2 months ago
- A projection-based framework for gradient-free and parallel learning ☆108 · Updated 5 months ago
- Implementation of the proposed minGRU in PyTorch ☆308 · Updated 8 months ago
- An easy-to-use PyTorch implementation of the Kolmogorov-Arnold Network and a few novel variations ☆186 · Updated last year
- Kolmogorov–Arnold Networks with modified activation (using an MLP to represent the activation) ☆107 · Updated last month
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆174 · Updated 2 years ago
- Unofficial implementation of the Linear Recurrent Unit (LRU; Orvieto et al., 2023) ☆59 · Updated 2 months ago
- ☆157 · Updated 3 months ago
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆51 · Updated 2 weeks ago
- ☆129 · Updated 3 months ago
- Liquid Structural State-Space Models ☆378 · Updated last year
- FlashRNN: Fast RNN Kernels with I/O Awareness ☆166 · Updated last month
- 📄 Small Batch Size Training for Language Models ☆63 · Updated last month
- Parallelizing non-linear sequential models over the sequence length ☆55 · Updated 5 months ago
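For orientation, linoss and several of the repositories above (the S4/S5 ports, the LRU implementations, and the parallel nonlinear-RNN projects) are built around linear recurrences that can be scanned over the sequence length. Below is a minimal, illustrative sketch of the forced-harmonic-oscillator update that an oscillatory state-space model discretizes, written as a plain sequential PyTorch loop; the function name, shapes, and the implicit-explicit (symplectic Euler) step are assumptions chosen for clarity and do not reflect the actual linoss API, which uses a parallel associative scan.

```python
import torch

def oscillatory_ssm_step_loop(u, A_diag, B, C, dt=0.1):
    """Illustrative sequential sketch of an oscillatory SSM recurrence.

    Discretizes x''(t) = -A x(t) + B u(t) with an implicit-explicit
    (symplectic Euler) step. Not the linoss API; names and shapes are
    assumptions for this sketch.

    u:      (T, d_in)  input sequence
    A_diag: (d,)       nonnegative diagonal of A (oscillator frequencies)
    B:      (d, d_in)  input projection
    C:      (d_out, d) output projection
    """
    T, _ = u.shape
    d = A_diag.shape[0]
    x = torch.zeros(d)  # oscillator position
    z = torch.zeros(d)  # oscillator velocity
    ys = []
    for t in range(T):
        # velocity update uses the *previous* position (explicit in x),
        # position update uses the *new* velocity (implicit in z)
        z = z + dt * (-A_diag * x + B @ u[t])
        x = x + dt * z
        ys.append(C @ x)
    return torch.stack(ys)  # (T, d_out)

# toy usage
torch.manual_seed(0)
u = torch.randn(16, 3)
A_diag = torch.rand(8)            # nonnegative -> stable oscillations
B = torch.randn(8, 3) / 8 ** 0.5
C = torch.randn(4, 8) / 8 ** 0.5
y = oscillatory_ssm_step_loop(u, A_diag, B, C, dt=0.1)
print(y.shape)  # torch.Size([16, 4])
```

Because the update is linear in the stacked state (x, z), each step is a fixed matrix recurrence, so the whole sequence can be evaluated with an associative parallel scan rather than this loop; that parallelizability is what these state-space repositories have in common.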