jloveric / high-order-layers-torch
High-order and sparse layers in PyTorch: Lagrange polynomial, piecewise Lagrange polynomial, piecewise discontinuous Lagrange polynomial (Chebyshev nodes), and Fourier series layers of arbitrary order. The piecewise implementations can be thought of as a 1D grid (for each neuron) where each grid element is a Lagrange polynomial. Both full connecte…
☆45 · Updated last year
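The core idea behind the library's description — a "neuron" that is itself a Lagrange polynomial sampled at Chebyshev nodes — can be sketched in plain NumPy. This is a minimal illustration of the general technique only, not this library's actual API; all function names here are hypothetical. The trainable parameters of such a neuron are its values at the interpolation nodes, and the output is the interpolant evaluated at the input.

```python
import numpy as np

def chebyshev_nodes(n):
    """Chebyshev points of the second kind on [-1, 1]."""
    k = np.arange(n)
    return np.cos(np.pi * k / (n - 1))

def lagrange_basis(x, nodes):
    """Evaluate all n Lagrange basis polynomials for `nodes` at scalar x."""
    n = len(nodes)
    basis = np.ones(n)
    for j in range(n):
        for m in range(n):
            if m != j:
                basis[j] *= (x - nodes[m]) / (nodes[j] - nodes[m])
    return basis

# A "polynomial neuron": its trainable parameters are the values at the
# interpolation nodes; the output is the interpolant at the input x.
nodes = chebyshev_nodes(5)
weights = np.array([0.0, 1.0, 0.0, -1.0, 0.0])  # values at the nodes
y = lagrange_basis(0.3, nodes) @ weights
```

Chebyshev nodes cluster near the interval ends, which suppresses the Runge oscillations that equispaced high-order interpolation suffers from; the piecewise variants in the library presumably tile the input range with many such low-order segments instead of one global polynomial.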
Alternatives and similar repositories for high-order-layers-torch
Users interested in high-order-layers-torch are comparing it to the libraries listed below.
- A State-Space Model with Rational Transfer Function Representation. ☆79 · Updated last year
- Diffusion models in PyTorch ☆107 · Updated 2 months ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆126 · Updated last year
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- C++ and CUDA ops for fused FourierKAN ☆80 · Updated last year
- Recursive Least Squares (RLS) with Neural Network for fast learning ☆56 · Updated last year
- PyTorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆50 · Updated last year
- ☆57 · Updated 11 months ago
- This code implements a Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation. ☆29 · Updated last year
- Official Implementation of the ICML 2023 paper: "Neural Wave Machines: Learning Spatiotemporally Structured Representations with Locally … ☆73 · Updated 2 years ago
- Kolmogorov-Arnold Networks with various basis functions like B-Splines, Fourier, Chebyshev, Wavelets, etc. ☆35 · Updated last year
- ☆34 · Updated last year
- Implicit Convolutional Kernels for Steerable CNNs [NeurIPS'23] ☆29 · Updated 6 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in PyTorch ☆65 · Updated 3 weeks ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4 ☆84 · Updated last year
- Clifford-Steerable Convolutional Neural Networks [ICML'24] ☆48 · Updated 3 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Kolmogorov–Arnold Networks with modified activation (using an MLP to represent the activation) ☆106 · Updated 10 months ago
- Toy genetic algorithm in PyTorch ☆53 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 8 months ago
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆40 · Updated 3 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 7 months ago
- ☆32 · Updated 11 months ago
- Official PyTorch and JAX Implementation of "Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks" ☆38 · Updated last year
- A simple example of VAEs with KANs ☆12 · Updated last year
- ☆95 · Updated last year
- The 2D discrete wavelet transform for JAX ☆43 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆53 · Updated 5 months ago