lightonai / principled-dfa-training
Code for our paper on best practices to train neural networks with direct feedback alignment (DFA).
☆21 · Updated 6 years ago
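For context, direct feedback alignment replaces the transposed forward weights that backpropagation uses to route errors with fixed random feedback matrices that project the output error directly to each hidden layer. Below is a minimal sketch of one DFA update for a two-hidden-layer MLP, assuming an MSE loss, tanh activations, and illustrative sizes and names; it is not the repo's implementation.

```python
# A minimal sketch of a direct feedback alignment (DFA) update for a
# two-hidden-layer MLP with tanh units and an MSE loss. All sizes, the
# learning rate, and the variable names are illustrative assumptions,
# not taken from the principled-dfa-training repo.
import torch

torch.manual_seed(0)
in_dim, hid, out_dim, lr = 784, 256, 10, 0.1

# Forward weights (trained) and fixed random feedback matrices (never trained).
W1 = torch.randn(hid, in_dim) / in_dim ** 0.5
W2 = torch.randn(hid, hid) / hid ** 0.5
W3 = torch.randn(out_dim, hid) / hid ** 0.5
B1 = torch.randn(hid, out_dim) / out_dim ** 0.5  # projects output error to layer 1
B2 = torch.randn(hid, out_dim) / out_dim ** 0.5  # projects output error to layer 2

def dfa_step(x, y):
    """One DFA update on a batch x: (batch, in_dim), y: (batch, out_dim)."""
    h1 = torch.tanh(x @ W1.T)
    h2 = torch.tanh(h1 @ W2.T)
    y_hat = h2 @ W3.T                 # linear readout
    e = y_hat - y                     # output error (MSE gradient up to a constant)

    # DFA: project the output error straight to each hidden layer through
    # the fixed random matrices B1, B2 instead of backpropagating via W3, W2.
    d2 = (e @ B2.T) * (1 - h2 ** 2)   # tanh'(a) = 1 - tanh(a)^2
    d1 = (e @ B1.T) * (1 - h1 ** 2)

    n = x.shape[0]
    W3.sub_(lr * e.T @ h2 / n)        # output layer still sees the true error
    W2.sub_(lr * d2.T @ h1 / n)
    W1.sub_(lr * d1.T @ x / n)
    return (e ** 2).mean().item()

x, y = torch.randn(64, in_dim), torch.randn(64, out_dim)
for step in range(10):
    print(step, dfa_step(x, y))       # loss should trend downward on this toy batch
```

Because B1 and B2 are fixed and random, each hidden layer's update depends only on the global output error and local activations, so all layers can be updated without a sequential backward pass; the empirical finding behind DFA is that the forward weights nonetheless align so these random projections carry a useful learning signal.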
Alternatives and similar repositories for principled-dfa-training
Users who are interested in principled-dfa-training are comparing it to the libraries listed below.
- Demo: Slightly More Bio-Plausible Backprop ☆21 · Updated 8 years ago
- PyTorch-based code for training fully-connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct fe… ☆65 · Updated 4 years ago
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆88 · Updated 3 years ago
- ML benchmark performance featuring LightOn's Optical Processing Unit (OPU) vs. CPU and GPU. ☆22 · Updated last year
- Proximal Mean-field for Neural Network Quantization ☆22 · Updated 5 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆74 · Updated 5 years ago
- Butterfly matrix multiplication in PyTorch ☆172 · Updated last year
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆39 · Updated 3 years ago
- Optical Transfer Learning ☆27 · Updated last year
- This repository contains the code for our recent paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters" ☆21 · Updated 6 years ago
- Python implementation of supervised PCA, supervised random projections, and their kernel counterparts. ☆19 · Updated 5 years ago
- Conformational exploration of SARS-CoV-2 (the coronavirus responsible for COVID-19) ☆16 · Updated 3 years ago
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆32 · Updated 8 years ago
- Identify a binary-weight or binary-weight-and-activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆52 · Updated 3 years ago
- Python library for running large-scale computations on LightOn's OPUs ☆36 · Updated 3 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- Factorized Neural Layers ☆29 · Updated 2 years ago
- ☆47 · Updated 5 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training".