vpuhoff / noprop-dt-mnist-pytorch
This repository contains an experimental PyTorch implementation exploring the NoProp algorithm, introduced in the paper "NoProp: Training Neural Networks without Back-propagation or Forward-propagation". The goal of NoProp is to train neural networks without relying on traditional end-to-end backpropagation.
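As a rough illustration of the idea (a minimal sketch based on the paper's premise, not this repository's actual code: the block architecture, noise schedule, and hyperparameters below are all illustrative assumptions), each block can be trained as an independent denoiser of a noisy label embedding, so no gradient ever flows end-to-end through the network:

```python
import torch
import torch.nn as nn

T = 3                      # number of denoising blocks (illustrative)
num_classes, dim = 10, 16
label_emb = nn.Embedding(num_classes, dim)   # learned class embeddings

class DenoiseBlock(nn.Module):
    """Predicts the clean label embedding from the image and a noisy embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(28 * 28 + dim, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, x, z_noisy):
        return self.net(torch.cat([x.flatten(1), z_noisy], dim=1))

blocks = [DenoiseBlock() for _ in range(T)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]

x = torch.randn(8, 1, 28, 28)            # dummy MNIST-like batch
y = torch.randint(0, num_classes, (8,))
u_y = label_emb(y).detach()              # clean target embedding

# Each block sees its own noisy input and minimizes its own local MSE loss;
# no loss term backpropagates across block boundaries.
for t, (block, opt) in enumerate(zip(blocks, opts)):
    alpha = (t + 1) / T                  # toy noise schedule
    z_noisy = alpha * u_y + (1 - alpha) ** 0.5 * torch.randn_like(u_y)
    loss = ((block(x, z_noisy) - u_y) ** 2).mean()
    opt.zero_grad()
    loss.backward()                      # local gradient only, within one block
    opt.step()
```

At inference the blocks would be applied in sequence to iteratively denoise a random embedding toward a class embedding; the sketch omits that step and the label-embedding training the paper describes.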
☆15 · Updated 4 months ago
Alternatives and similar repositories for noprop-dt-mnist-pytorch
Users interested in noprop-dt-mnist-pytorch are comparing it to the repositories listed below.
- ☆50 · Updated 9 months ago
- ☆47 · Updated last year
- Pytorch Implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated 2 weeks ago
- Contrastive Reinforcement Learning ☆46 · Updated last month
- State Space Models ☆70 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- The official GitHub page for the survey paper "A Survey of RWKV". ☆29 · Updated 9 months ago
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in Pytorch ☆94 · Updated 8 months ago
- ☆22 · Updated last month
- ☆128 · Updated 2 months ago
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆37 · Updated last year
- ☆75 · Updated 8 months ago
- ScrollNet for Continual Learning ☆11 · Updated 2 years ago
- A repository for DenseSSMs ☆89 · Updated last year
- ☆22 · Updated 3 years ago
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆28 · Updated 4 months ago
- ☆137 · Updated last year
- Implementation of xLSTM in Pytorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated this week
- PyTorch implementation of the paper "NoProp: Training Neural Networks Without Backpropagation or Forward Propagation". ☆62 · Updated 6 months ago
- Implementation of a Hierarchical Mamba as described in the paper: "Hierarchical State Space Models for Continuous Sequence-to-Sequence Mo… ☆13 · Updated 11 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in Pytorch ☆66 · Updated 2 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last week
- A simpler Pytorch + Zeta implementation of the paper: "SiMBA: Simplified Mamba-based Architecture for Vision and Multivariate Time series… ☆28 · Updated 11 months ago
- Official implementation of Adaptive Feature Transfer (AFT) ☆23 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆207 · Updated last week
- Simba ☆214 · Updated last year
- 🔥 MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023] ☆21 · Updated last year
- Official implementation of the paper "Towards Deeper Level Decomposition of Linear and Nonlinear Patterns in Time Series". ☆17 · Updated last month
- ☆35 · Updated 5 months ago
- Toy genetic algorithm in Pytorch ☆52 · Updated 6 months ago