jloveric / high-order-layers-torch
High-order and sparse layers in PyTorch. Lagrange polynomial, piecewise Lagrange polynomial, piecewise discontinuous Lagrange polynomial (Chebyshev nodes), and Fourier series layers of arbitrary order. The piecewise implementations can be thought of as a 1d grid (one per neuron) where each grid element is a Lagrange polynomial. Both full connecte…
☆45 · Updated last year
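The piecewise idea in the description above can be sketched in a few lines: partition the input interval into segments and evaluate a local Lagrange polynomial on Chebyshev nodes inside whichever segment the input falls in. This is a minimal plain-Python illustration with hypothetical helper names, not the repository's actual API.

```python
import math

def chebyshev_nodes(n):
    # Chebyshev nodes of the first kind on [-1, 1]
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def lagrange_eval(nodes, values, x):
    # Evaluate the Lagrange interpolant through (nodes[i], values[i]) at x
    total = 0.0
    for i, (xi, vi) in enumerate(zip(nodes, values)):
        basis = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += vi * basis
    return total

def piecewise_lagrange(x, segments, order, weights):
    # One "neuron": map x in [-1, 1] to a segment of a 1d grid, then
    # evaluate that segment's local Lagrange polynomial of the given order.
    # weights is a list of `segments` lists, each holding order+1 nodal values
    # (the learnable parameters in a layer like this).
    seg = min(int((x + 1.0) / 2.0 * segments), segments - 1)
    left = -1.0 + 2.0 * seg / segments
    width = 2.0 / segments
    t = 2.0 * (x - left) / width - 1.0  # local coordinate in [-1, 1]
    nodes = chebyshev_nodes(order + 1)
    return lagrange_eval(nodes, weights[seg], t)
```

Because each segment carries its own nodal values and nothing forces agreement at segment boundaries, the same machinery gives the discontinuous variant for free; continuity would have to be imposed by sharing boundary nodes.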
Alternatives and similar repositories for high-order-layers-torch
Users interested in high-order-layers-torch are comparing it to the libraries listed below.
- A State-Space Model with Rational Transfer Function Representation. ☆82 · Updated last year
- Diffusion models in PyTorch ☆111 · Updated 3 weeks ago
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- Recursive Least Squares (RLS) with Neural Network for fast learning ☆55 · Updated last year
- PyTorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆51 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated last year
- Implementation of GateLoop Transformer in PyTorch and JAX ☆90 · Updated last year
- ☆96 · Updated last year
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- ☆58 · Updated last year
- C++ and CUDA ops for fused FourierKAN ☆80 · Updated last year
- Clifford-Steerable Convolutional Neural Networks [ICML'24] ☆49 · Updated 5 months ago
- A simple example of VAEs with KANs ☆12 · Updated last year
- Kolmogorov–Arnold Networks with modified activation (using an MLP to represent the activation) ☆105 · Updated 2 weeks ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch ☆66 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆102 · Updated 9 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 8 months ago
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆50 · Updated 2 weeks ago
- This code implements a Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation. ☆29 · Updated last year
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on the Annotated S4. ☆87 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Toy genetic algorithm in PyTorch ☆52 · Updated 5 months ago
- Free-form flows are a generative model training a pair of neural networks via maximum likelihood ☆49 · Updated 3 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆120 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆55 · Updated 6 months ago
- Code repository of the paper "Variational Stochastic Gradient Descent for Deep Neural Networks" published at … ☆41 · Updated 5 months ago
- Transformers with doubly stochastic attention ☆48 · Updated 3 years ago
- ☆33 · Updated last year
- ☆127 · Updated 2 months ago
- Official Implementation of the ICML 2023 paper: "Neural Wave Machines: Learning Spatiotemporally Structured Representations with Locally … ☆74 · Updated 2 years ago