vpuhoff / noprop-dt-mnist-pytorch
This repository contains an experimental PyTorch implementation of the NoProp algorithm from the paper "NoProp: Training Neural Networks Without Back-propagation or Forward-propagation". NoProp aims to train neural networks without relying on traditional end-to-end backpropagation: each block is trained locally to denoise a noised target, so no gradients flow between blocks.
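To make the idea concrete, here is a minimal, hedged sketch of NoProp-style local training. It trains a small stack of blocks where block t learns to map (input features, noised one-hot label) back to the clean label, with no gradient flowing between blocks. The block structure, sizes, linear noise schedule, and MSE objective are illustrative assumptions, not the repository's exact implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_CLASSES, DIM, T = 10, 32, 3  # assumed toy sizes, not the repo's config

class DenoiseBlock(nn.Module):
    """One locally-trained block: denoises a noisy label conditioned on x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DIM + NUM_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, NUM_CLASSES),
        )
    def forward(self, x, z_noisy):
        return self.net(torch.cat([x, z_noisy], dim=-1))

blocks = [DenoiseBlock() for _ in range(T)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-2) for b in blocks]
# assumed linear schedule: block 0 sees the noisiest target (matching
# inference, which starts from pure noise), later blocks see cleaner ones
alphas = torch.linspace(0.2, 0.8, T)

x = torch.randn(64, DIM)                       # toy input features
y = torch.randint(0, NUM_CLASSES, (64,))       # toy labels
target = nn.functional.one_hot(y, NUM_CLASSES).float()

for step in range(200):
    for t, (block, opt) in enumerate(zip(blocks, opts)):
        # each block gets an independently noised target; no gradient ever
        # crosses block boundaries, so there is no end-to-end backprop
        z_noisy = alphas[t] * target + (1 - alphas[t]) * torch.randn_like(target)
        loss = nn.functional.mse_loss(block(x, z_noisy), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

# inference: start from pure noise and denoise sequentially through the blocks
z = torch.randn(64, NUM_CLASSES)
for block in blocks:
    z = block(x, z)
pred = z.argmax(dim=-1)
print("training accuracy:", (pred == y).float().mean().item())
```

On this tiny synthetic task the stack easily fits the training set, which only illustrates the mechanism; the paper's diffusion-style derivation and the repository's MNIST setup are more involved.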
☆13 · Updated 2 months ago
Alternatives and similar repositories for noprop-dt-mnist-pytorch
Users interested in noprop-dt-mnist-pytorch are comparing it to the repositories listed below.
- The official GitHub page for the survey paper "A Survey of RWKV" ☆29 · Updated 7 months ago
- State Space Models ☆70 · Updated last year
- ScrollNet for Continual Learning ☆11 · Updated last year
- Contrastive Reinforcement Learning ☆30 · Updated this week
- A repository for DenseSSMs ☆88 · Updated last year
- Repository for feature selection with tabular models ☆65 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆36 · Updated 11 months ago
- Implementation of DeepCrossAttention by Heddes et al. at Google Research, in PyTorch ☆91 · Updated 6 months ago
- Official implementation of Adaptive Feature Transfer (AFT) ☆23 · Updated last year
- Official PyTorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- [NeurIPS 2023] The PyTorch implementation of Scheduled (Stable) Weight Decay ☆60 · Updated last year
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch ☆65 · Updated 3 weeks ago
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated this week
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 · Updated last year
- PyTorch implementation of the paper "NoProp: Training Neural Networks Without Backpropagation or Forward Propagation" ☆62 · Updated 4 months ago
- Implementation of xLSTM in PyTorch from the paper "xLSTM: Extended Long Short-Term Memory" ☆119 · Updated 2 weeks ago
- Autoregressive Image Generation ☆32 · Updated 2 months ago
- A Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation ☆29 · Updated last year
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling ☆204 · Updated 3 weeks ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆77 · Updated last year
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆31 · Updated last year
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆28 · Updated 2 months ago