lightonai / dfa-scales-to-modern-deep-learning
Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing.
☆88 · Updated 2 years ago
Alternatives and similar repositories for dfa-scales-to-modern-deep-learning:
Users interested in dfa-scales-to-modern-deep-learning are comparing it to the repositories listed below.
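For context, direct feedback alignment (DFA) replaces the transposed weight matrices that backpropagation uses for the backward pass with fixed random feedback matrices that project the output error straight to each hidden layer. A minimal NumPy sketch on a toy XOR task (a hypothetical illustration, not code from any repository listed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR with a 2-16-1 MLP.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)   # forward weights
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
B1 = rng.normal(0, 0.5, (1, 16))  # fixed random feedback matrix (never trained)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # predictions
    losses.append(-np.mean(y * np.log(out + 1e-9)
                           + (1 - y) * np.log(1 - out + 1e-9)))
    e = out - y                     # output error (sigmoid + BCE)
    # DFA: the hidden error signal uses the fixed B1, not W2.T as backprop would.
    dh = (e @ B1) * (1 - h ** 2)
    W2 -= lr * h.T @ e / len(X); b2 -= lr * e.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)
```

With a single hidden layer DFA coincides with feedback alignment; the repositories below probe whether the same trick scales to deeper and more exotic architectures.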
- Python implementation of the methods in Meulemans et al. 2020 - A Theoretical Framework For Target Propagation ☆32 · Updated 6 months ago
- Code for our paper on best practices to train neural networks with direct feedback alignment (DFA). ☆21 · Updated 5 years ago
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆32 · Updated 8 years ago
- Fully documented PyTorch implementation of the Equilibrium Propagation algorithm. ☆34 · Updated 5 years ago
- Implementation of feedback alignment learning in PyTorch ☆31 · Updated last year
- PyTorch-based code for training fully-connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct fe… ☆66 · Updated 4 years ago
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆13 · Updated last year
- BioTorch is a PyTorch framework specializing in biologically plausible learning algorithms ☆50 · Updated last year
- Public code for Illing, Ventura, Bellec & Gerstner 2021: Local plasticity rules can learn deep representations using self-supervised cont… ☆24 · Updated last year
- Automatic Hebbian learning in multi-layer convolutional networks with PyTorch, by expressing Hebbian plasticity rules as gradients ☆38 · Updated last year
- Deep Learning without Weight Transport ☆35 · Updated 5 years ago
- A lightweight and flexible framework for Hebbian learning in PyTorch. ☆86 · Updated last year
- Example of Dense Associative Memory training on MNIST ☆36 · Updated 2 years ago
- Implementation of the "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch ☆108 · Updated last year
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Network ☆64 · Updated 4 years ago
- PyTorch implementation of linear and convolutional layers with fixed, random feedback weights. ☆14 · Updated 4 years ago
- Memory-efficient MAML using gradient checkpointing ☆84 · Updated 5 years ago
- ZORB: A Derivative-Free Backpropagation Algorithm for Neural Networks ☆22 · Updated 4 years ago
- PyTorch implementation of Mixer-nano (0.67M parameters, vs. 18M for the original Mixer-S/16) with 90.83% accuracy on CIFAR-10. Training from s… ☆32 · Updated 3 years ago
- Demo: Slightly More Bio-Plausible Backprop ☆21 · Updated 8 years ago
- "Towards Scaling Difference Target Propagation by Learning Backprop Targets" (ICML 2022) ☆12 · Updated 2 years ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. ☆75 · Updated 9 months ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆40 · Updated 5 years ago
- Paper lists and information on the mean-field theory of deep learning ☆74 · Updated 6 years ago
- Train self-modifying neural networks with neuromodulated plasticity ☆76 · Updated 5 years ago
- 👩 PyTorch and JAX code for the Madam optimiser. ☆51 · Updated 4 years ago
- Official code for Coupled Oscillatory RNN (ICLR 2021, Oral) ☆43 · Updated 3 years ago
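Several entries above package Hebbian plasticity rules for PyTorch; the core idea is a local, correlation-driven weight update. A plain-NumPy sketch of Oja's rule, a stabilised Hebbian update that extracts the first principal component (a toy illustration under assumed synthetic data, not code from any listed repository):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D data with one dominant direction of variance.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)

eta = 0.01
for x in X:
    yact = w @ x
    # Oja's rule: Hebbian term y*x plus a decay -y^2*w that keeps ||w|| bounded.
    w += eta * yact * (x - yact * w)

# w aligns (up to sign) with the top eigenvector of the covariance.
top = np.linalg.eigh(cov)[1][:, -1]
alignment = abs(w @ top) / np.linalg.norm(w)
```

The decay term is what distinguishes Oja's rule from raw Hebbian learning, whose weights would otherwise grow without bound.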
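The "Gradients without backpropagation" entry trains with forward gradients: an unbiased estimate (∇f·v)v built from a single forward-mode directional derivative along a random direction v, with no backward pass. A minimal NumPy sketch on a least-squares problem (the directional derivative is written analytically here; in real use it would come from forward-mode AD such as a functorch JVP, and this is a toy illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Least-squares objective f(w) = ||A w - b||^2 with a known minimiser.
A = rng.normal(size=(10, 5))
w_true = rng.normal(size=5)
b = A @ w_true

def f(w):
    return float(np.sum((A @ w - b) ** 2))

def grad_f(w):
    # Closed-form gradient, used only to form the exact directional
    # derivative g @ v; forward-mode AD would supply this as a JVP.
    return 2.0 * A.T @ (A @ w - b)

w = np.zeros(5)
f0 = f(w)
lr = 0.001
for _ in range(5000):
    v = rng.normal(size=5)    # random tangent direction
    dd = grad_f(w) @ v        # directional derivative (the JVP)
    w -= lr * dd * v          # forward-gradient step: E[dd * v] = grad f(w)

f_final = f(w)
```

The estimate is unbiased but higher-variance than the true gradient, which is why the step size is kept small.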