lightonai / principled-dfa-training
Code for our paper on best practices to train neural networks with direct feedback alignment (DFA).
☆22 · Updated 5 years ago
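For context, direct feedback alignment replaces backpropagation's transposed weight matrices with fixed random feedback matrices that project the output error straight to each hidden layer, so no backward weight transport is needed. Below is a minimal NumPy sketch of that idea; it is not this repository's code, and the toy task, layer sizes, and learning rate are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of Direct Feedback Alignment (DFA) in NumPy.
# NOT the repository's implementation; all hyperparameters here are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 256 samples, 10-d inputs, 2-class one-hot targets (assumed task).
X = rng.standard_normal((256, 10))
y = np.eye(2)[(X.sum(axis=1) > 0).astype(int)]

# Two-layer network: 10 -> 64 -> 2.
W1 = rng.standard_normal((10, 64)) * 0.1
W2 = rng.standard_normal((64, 2)) * 0.1

# DFA's defining ingredient: a FIXED random feedback matrix that projects the
# output error directly to the hidden layer, replacing W2.T from backprop.
B1 = rng.standard_normal((2, 64)) * 0.1

lr = 0.05
for step in range(500):
    # Ordinary forward pass.
    a1 = X @ W1
    h1 = np.tanh(a1)
    logits = h1 @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)

    # Output error (softmax + cross-entropy gradient).
    e = (p - y) / len(X)

    # DFA step: the hidden layer's teaching signal is the output error sent
    # through the fixed random matrix B1, gated by the local tanh derivative.
    delta1 = (e @ B1) * (1.0 - h1 ** 2)

    # Weight updates use only local activations and these direct signals.
    W2 -= lr * h1.T @ e
    W1 -= lr * X.T @ delta1

print(f"toy training accuracy: {(p.argmax(1) == y.argmax(1)).mean():.2f}")
```

The only change from backprop is the `delta1` line: the error skips the backward pass through `W2` entirely, which is what makes DFA layer-parallel and hardware-friendly.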
Related projects
Alternatives and complementary repositories for principled-dfa-training
- Optical Transfer Learning ☆27 · Updated last year
- Performance benchmarks of machine-learning workloads on LightOn's Optical Processing Unit (OPU) versus CPU and GPU. ☆21 · Updated last year
- Python implementation of supervised PCA, supervised random projections, and their kernel counterparts. ☆20 · Updated 4 years ago
- Double Trouble in the Double Descent Curve with Optical Processing Units. ☆12 · Updated 2 years ago
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing. ☆84 · Updated 2 years ago
- Conformational exploration of SARS-CoV-2 (the coronavirus responsible for COVID-19). ☆16 · Updated 2 years ago
- PyTorch-based code for training fully-connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct feedback alignment (DFA)… ☆63 · Updated 3 years ago
- Code to perform Model-Free Episodic Control using Aurora OPUs ☆17 · Updated 4 years ago
- Double Descent Curve with Optical Random Features ☆27 · Updated 2 years ago
- Fast graph classifier with optical random features ☆12 · Updated 3 years ago
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆31 · Updated 7 years ago
- Experiments with Direct Feedback Alignment and a comparison to backpropagation. ☆8 · Updated 7 years ago
- Demo: Slightly More Bio-Plausible Backprop ☆22 · Updated 7 years ago
- Python library for running large-scale computations on LightOn's OPUs ☆35 · Updated 2 years ago
- Training neural networks with backpropagation, feedback alignment, and direct feedback alignment ☆101 · Updated 6 years ago
- Code for the paper "Training Binary Neural Networks using the Bayesian Learning Rule" ☆37 · Updated 2 years ago
- TensorFlow implementation of Direct and Random Feedback Alignment ☆24 · Updated 8 years ago
- GitHub page for SSDFA ☆11 · Updated 4 years ago
- Python client for the LightOn Muse API ☆14 · Updated 2 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆16 · Updated 3 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆73 · Updated 4 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 5 years ago
- Fully documented PyTorch implementation of the Equilibrium Propagation algorithm. ☆31 · Updated 4 years ago
- Implementation of BinaryConnect in PyTorch ☆36 · Updated 3 years ago
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆12 · Updated 7 months ago
- Reproduction of "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" for the Reproducibility Challenge @ NeurIPS… ☆11 · Updated 4 years ago