jloveric / high-order-layers-torch
High-order and sparse layers in PyTorch. Lagrange Polynomial, Piecewise Lagrange Polynomial, Piecewise Discontinuous Lagrange Polynomial (Chebyshev nodes) and Fourier Series layers of arbitrary order. Piecewise implementations can be thought of as a 1D grid (for each neuron) where each grid element is a Lagrange polynomial. Both full connecte…
☆45 · Updated last year
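The description above sketches layers in which each neuron's response is a Lagrange polynomial with learnable values at Chebyshev nodes. A minimal illustration of that idea is below; the class name, shapes, and all parameters are hypothetical and do not reflect the repository's actual API. It uses the standard barycentric form of Lagrange interpolation at Chebyshev nodes of the second kind:

```python
import torch
import torch.nn as nn

class LagrangeLayer(nn.Module):
    """Sketch only: each (output, input) pair owns a Lagrange polynomial,
    parameterized by its values at Chebyshev nodes on [-1, 1] and evaluated
    with the barycentric interpolation formula."""

    def __init__(self, in_features, out_features, order=4):
        super().__init__()
        k = torch.arange(order + 1, dtype=torch.float32)
        # Chebyshev nodes of the second kind: x_k = cos(k*pi/n)
        self.register_buffer("nodes", torch.cos(torch.pi * k / order))
        # Barycentric weights for these nodes: (-1)^k, halved at the endpoints
        w = (-1.0) ** k
        w[0] *= 0.5
        w[-1] *= 0.5
        self.register_buffer("bary_w", w)
        # Learnable polynomial values at the nodes, one polynomial per (out, in)
        self.values = nn.Parameter(
            0.1 * torch.randn(out_features, in_features, order + 1)
        )

    def forward(self, x):
        # x: (batch, in_features), assumed roughly within [-1, 1]
        eps = 1e-9
        d = x.unsqueeze(-1) - self.nodes          # (batch, in, order+1)
        # Guard exact node hits: the eps-sized term then dominates the sum,
        # so the output approaches the stored node value, as it should
        d = torch.where(d.abs() < eps, torch.full_like(d, eps), d)
        c = self.bary_w / d                       # barycentric terms
        num = torch.einsum("bik,oik->boi", c, self.values)
        den = c.sum(-1).unsqueeze(1)              # (batch, 1, in)
        p = num / den                             # per-input polynomial values
        return p.sum(-1)                          # sum over inputs -> (batch, out)
```

Because the barycentric formula reproduces any polynomial up to the chosen order exactly, setting the node values of a 1-in/1-out layer to the node coordinates themselves makes the layer an identity map on its input, which is a convenient sanity check.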
Alternatives and similar repositories for high-order-layers-torch
Users interested in high-order-layers-torch are comparing it to the libraries listed below.
- A State-Space Model with Rational Transfer Function Representation. ☆83 · Updated last year
- Deep Networks Grok All the Time and Here is Why ☆38 · Updated last year
- ☆97 · Updated last year
- Recursive Least Squares (RLS) with Neural Network for fast learning ☆57 · Updated 2 years ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Diffusion models in PyTorch ☆116 · Updated last week
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆51 · Updated last year
- Kolmogorov–Arnold Networks with modified activation (using an MLP to represent the activation) ☆107 · Updated last month
- C++ and CUDA ops for fused FourierKAN ☆82 · Updated last year
- Toy genetic algorithm in Pytorch ☆54 · Updated 7 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆91 · Updated last year
- Official repo for BWLer: Barycentric Weight Layer ☆25 · Updated 2 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in Pytorch ☆67 · Updated last week
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in Pytorch ☆25 · Updated 10 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆55 · Updated 8 months ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4 ☆85 · Updated last year
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated 2 years ago
- A simple example of VAEs with KANs ☆11 · Updated last year
- ☆61 · Updated last year
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in Pytorch ☆118 · Updated 3 months ago
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆105 · Updated last year
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆90 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated 11 months ago
- Clifford-Steerable Convolutional Neural Networks [ICML'24] ☆49 · Updated 6 months ago
- A Radial Basis Function (RBF) based Kolmogorov–Arnold Network (KAN) for function approximation ☆29 · Updated last year
- The 2D discrete wavelet transform for JAX ☆44 · Updated 2 years ago
- Training small GPT-2 style models using Kolmogorov–Arnold networks ☆121 · Updated last year
- Free-form flows are a generative model that trains a pair of neural networks via maximum likelihood ☆49 · Updated 5 months ago
- Kolmogorov–Arnold Networks with various basis functions such as B-splines, Fourier, Chebyshev, and wavelets ☆35 · Updated last year