lightonai/principled-dfa-training
Code for our paper on best practices for training neural networks with direct feedback alignment (DFA).
☆22 · Updated 5 years ago
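DFA replaces backprop's symmetric weight transport: instead of propagating the error backwards through the transposed forward weights, each hidden layer receives the global output error projected through a fixed random feedback matrix. A minimal NumPy sketch of the idea on a toy regression task (network sizes, learning rate, and variable names are illustrative, not taken from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn a linear map with a small tanh network.
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal((10, 1))

# Forward weights (10 -> 32 -> 1) and a *fixed* random feedback matrix B1
# that projects the output error straight back to the hidden layer.
W1 = 0.1 * rng.standard_normal((10, 32))
W2 = 0.1 * rng.standard_normal((32, 1))
B1 = rng.standard_normal((1, 32))

lr = 0.01
losses = []
for _ in range(500):
    h = np.tanh(X @ W1)          # hidden activations
    y_hat = h @ W2               # network output
    e = y_hat - y                # global output error
    losses.append(float(np.mean(e ** 2)))
    # DFA: the hidden error signal uses fixed random B1, not W2.T as backprop would.
    dh = (e @ B1) * (1.0 - h ** 2)
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)
```

Because B1 never changes, no gradient needs to flow backwards through the forward weights; the forward weights learn to align themselves with the random feedback, which is what makes DFA attractive for parallel and optical hardware such as LightOn's OPU.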
Related projects
Alternatives and complementary repositories for principled-dfa-training
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆84 · Updated 2 years ago
- ML benchmark performance featuring LightOn's Optical Processing Unit (OPU) vs. CPU and GPU. ☆21 · Updated last year
- Python implementation of supervised PCA, supervised random projections, and their kernel counterparts. ☆20 · Updated 4 years ago
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆31 · Updated 7 years ago
- Conformational exploration of SARS-CoV-2 (the coronavirus responsible for COVID-19) ☆16 · Updated 2 years ago
- Optical Transfer Learning ☆27 · Updated last year
- PyTorch-based code for training fully-connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct fe… ☆63 · Updated 3 years ago
- Code to perform Model-Free Episodic Control using Aurora OPUs ☆17 · Updated 4 years ago
- Demo: Slightly More Bio-Plausible Backprop ☆22 · Updated 7 years ago
- Double Trouble in the Double Descent Curve with Optical Processing Units ☆12 · Updated 2 years ago
- Fast graph classifier with optical random features ☆12 · Updated 3 years ago
- Training neural networks with backprop, feedback alignment, and direct feedback alignment ☆101 · Updated 6 years ago
- Experiments with Direct Feedback Alignment and comparison to backpropagation ☆8 · Updated 7 years ago
- Double Descent Curve with Optical Random Features ☆27 · Updated 2 years ago
- TensorFlow implementation of Direct and Random Feedback Alignment ☆24 · Updated 7 years ago
- ☆13 · Updated 3 years ago
- Python library for running large-scale computations on LightOn's OPUs ☆35 · Updated 2 years ago
- Code for the paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters" ☆21 · Updated 6 years ago
- Fully documented PyTorch implementation of the Equilibrium Propagation algorithm ☆31 · Updated 4 years ago
- GitHub page for SSDFA ☆11 · Updated 4 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆37 · Updated 2 years ago
- Python client for the LightOn Muse API ☆14 · Updated 2 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆16 · Updated 3 years ago
- Regularization and neural network training dynamics ☆14 · Updated 4 years ago
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆12 · Updated 7 months ago
- ☆27 · Updated 5 years ago
- ☆35 · Updated 5 years ago
- Deep learning with a multiplication budget ☆47 · Updated 6 years ago
- Implementation of feedback alignment learning in PyTorch ☆29 · Updated last year