jloveric / high-order-layers-torch
High-order and sparse layers in PyTorch. Lagrange Polynomial, Piecewise Lagrange Polynomial, Piecewise Discontinuous Lagrange Polynomial (Chebyshev nodes), and Fourier Series layers of arbitrary order. Piecewise implementations can be thought of as a 1D grid (for each neuron) where each grid element is a Lagrange polynomial. Both full connecte…
☆45 · Updated last year
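
To make the description concrete, below is a minimal sketch of the single-polynomial case in PyTorch: each input feature gets its own learnable polynomial, parameterized by its function values at Chebyshev nodes; the piecewise variants described above would additionally partition [-1, 1] into grid cells and fit one such polynomial per cell. The class name `LagrangePolyLayer`, the `order` parameter, and the initialization scale are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class LagrangePolyLayer(nn.Module):
    """Sketch: each input feature passes through its own learnable polynomial,
    parameterized by function values at Chebyshev nodes on [-1, 1].
    Illustrative only -- not the actual high-order-layers-torch API."""

    def __init__(self, in_features: int, order: int = 4):
        super().__init__()
        # Chebyshev-Lobatto nodes x_k = cos(pi * k / order), k = 0..order
        k = torch.arange(order + 1, dtype=torch.float32)
        self.register_buffer("nodes", torch.cos(torch.pi * k / order))
        # Learnable function values at the nodes, one set per input feature
        self.values = nn.Parameter(0.1 * torch.randn(in_features, order + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features), assumed already scaled into [-1, 1]
        n = self.nodes.numel()
        d = x.unsqueeze(-1) - self.nodes  # (batch, feat, n)
        basis = []
        for j in range(n):
            # Lagrange basis: L_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
            mask = torch.arange(n, device=x.device) != j
            num = d[..., mask].prod(dim=-1)
            den = (self.nodes[j] - self.nodes[mask]).prod()
            basis.append(num / den)
        basis = torch.stack(basis, dim=-1)        # (batch, feat, n)
        # Evaluate each feature's polynomial at its input value
        return (basis * self.values).sum(dim=-1)  # (batch, feat)

layer = LagrangePolyLayer(in_features=3, order=5)
y = layer(2 * torch.rand(8, 3) - 1)  # inputs in [-1, 1]
print(y.shape)                        # torch.Size([8, 3])
```

Parameterizing the polynomial by its values at the nodes keeps the parameters on the same scale as the function itself, and Chebyshev nodes cluster near the interval endpoints, which suppresses the Runge oscillations that equally spaced nodes would cause at higher orders.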
Alternatives and similar repositories for high-order-layers-torch
Users interested in high-order-layers-torch are comparing it to the libraries listed below.
- A State-Space Model with Rational Transfer Function Representation. ☆82 · Updated last year
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated last year
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch ☆67 · Updated 3 months ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆51 · Updated last year
- Recursive Least Squares (RLS) with a neural network for fast learning ☆55 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- Diffusion models in PyTorch ☆112 · Updated last week
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ☆25 · Updated 9 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated 10 months ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆46 · Updated 2 years ago
- Kolmogorov–Arnold Networks with modified activation (using an MLP to represent the activation) ☆106 · Updated last month
- C++ and CUDA ops for fused FourierKAN ☆81 · Updated last year
- TensorLy-Torch: Deep Tensor Learning with TensorLy and PyTorch ☆81 · Updated last year
- Toy genetic algorithm in Pytorch ☆52 · Updated 6 months ago
- Code for "Theoretical Foundations of Deep Selective State-Space Models" (NeurIPS 2024) ☆15 · Updated 10 months ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4. ☆85 · Updated last year
- Clifford-Steerable Convolutional Neural Networks [ICML'24] ☆49 · Updated 6 months ago
- This code implements a Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation. ☆29 · Updated last year
- Free-form flows: a generative model that trains a pair of neural networks via maximum likelihood ☆49 · Updated 4 months ago
- A simple example of VAEs with KANs ☆11 · Updated last year
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆51 · Updated last month
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆45 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆55 · Updated 7 months ago