lightonai / principled-dfa-training
Code for our paper on best practices to train neural networks with direct feedback alignment (DFA).
☆21 · Updated 5 years ago
Alternatives and similar repositories for principled-dfa-training:
Users interested in principled-dfa-training are comparing it to the repositories listed below.
- Performance of ML benchmarks featuring LightOn's Optical Processing Unit (OPU) vs. CPU and GPU ☆21 · Updated last year
- Conformational exploration of SARS-CoV-2 (the coronavirus responsible for COVID-19) ☆16 · Updated 2 years ago
- Python implementation of supervised PCA, supervised random projections, and their kernel counterparts ☆20 · Updated 4 years ago
- Double Trouble in the Double Descent Curve with Optical Processing Units ☆12 · Updated 2 years ago
- Optical Transfer Learning ☆27 · Updated last year
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆86 · Updated 2 years ago
- Fast graph classifier with optical random features ☆12 · Updated 3 years ago
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆31 · Updated 7 years ago
- Code to perform Model-Free Episodic Control using Aurora OPUs ☆17 · Updated 4 years ago
- Double Descent Curve with Optical Random Features ☆28 · Updated 2 years ago
- PyTorch-based code for training fully connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct fe… ☆64 · Updated 3 years ago
- Experiments with Direct Feedback Alignment and comparison to backpropagation ☆8 · Updated 7 years ago
- Demo: Slightly More Bio-Plausible Backprop ☆21 · Updated 7 years ago
- Training neural networks with backpropagation, feedback alignment, and direct feedback alignment ☆100 · Updated 7 years ago
- Python library for running large-scale computations on LightOn's OPUs ☆35 · Updated 2 years ago
- TensorFlow implementation of direct and random feedback alignment ☆24 · Updated 8 years ago
- Architecture embeddings independent of the parametrization of the search space ☆15 · Updated 3 years ago
- Python client for the LightOn Muse API ☆14 · Updated 2 years ago
- GitHub page for SSDFA ☆11 · Updated 4 years ago
- ☆27 · Updated 5 years ago
- Code for our paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters" ☆21 · Updated 6 years ago
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆12 · Updated 9 months ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆37 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- Implementation of feedback alignment learning in PyTorch ☆29 · Updated last year
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆16 · Updated 4 years ago
- Implementation of the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆73 · Updated 5 years ago
- PyTorch implementation of linear and convolutional layers with fixed, random feedback weights ☆13 · Updated 3 years ago
- Implementation of NEWMA, a scalable method for model-free online change-point detection ☆46 · Updated 4 years ago
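Most of the repositories above revolve around direct feedback alignment (DFA), in which the output error is delivered to each hidden layer through a fixed random matrix instead of being backpropagated through the transposed forward weights. As a rough illustration only (not the code from any repository listed here; the network sizes, learning rate, and toy data are arbitrary), a minimal NumPy sketch of DFA on a two-layer network looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP trained with direct feedback alignment (DFA):
# the output error reaches the hidden layer through a FIXED random
# matrix B1 rather than through W2.T as in backpropagation.
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B1 = rng.normal(0.0, 0.5, (n_hid, n_out))  # fixed random feedback weights

# Arbitrary toy regression data: targets are a random linear map of the inputs.
X = rng.normal(size=(64, n_in))
Y = X @ rng.normal(size=(n_in, n_out))

lr = 0.05
losses = []
for _ in range(200):
    # Forward pass.
    h1 = np.tanh(X @ W1.T)            # hidden activations, shape (64, n_hid)
    y_hat = h1 @ W2.T                 # linear readout, shape (64, n_out)
    e = y_hat - Y                     # output error = MSE gradient w.r.t. y_hat
    losses.append(float((e ** 2).mean()))

    # DFA updates: the hidden delta uses B1, not the transpose of W2.
    d1 = (e @ B1.T) * (1.0 - h1 ** 2)  # random projection of error * tanh'
    W2 -= lr * e.T @ h1 / len(X)       # output layer still gets the true gradient
    W1 -= lr * d1.T @ X / len(X)       # hidden layer trained with random feedback

print(losses[0], losses[-1])
```

Because the output layer still receives its exact gradient, the training loss decreases on this toy problem even though the hidden layer is updated through random feedback; the hedged claim of the DFA literature is that such updates come to align with the true backpropagation signal over training.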