lightonai / double-trouble-in-double-descent
Double Trouble in the Double Descent Curve with Optical Processing Units.
☆12 · Updated 2 years ago
Alternatives and similar repositories for double-trouble-in-double-descent
Users interested in double-trouble-in-double-descent are comparing it to the libraries listed below.
- Python implementation of supervised PCA, supervised random projections, and their kernel counterparts. ☆20 · Updated 5 years ago
- Conformational exploration of SARS-CoV-2 (the coronavirus responsible for COVID-19) ☆16 · Updated 2 years ago
- Optical Transfer Learning ☆27 · Updated last year
- ML benchmark performance featuring LightOn's Optical Processing Unit (OPU) vs. CPU and GPU. ☆22 · Updated last year
- Fast graph classifier with optical random features ☆12 · Updated 4 years ago
- Double Descent Curve with Optical Random Features ☆29 · Updated 2 years ago
- Code to perform Model-Free Episodic Control using Aurora OPUs ☆17 · Updated 5 years ago
- Code for our paper on best practices to train neural networks with direct feedback alignment (DFA). ☆21 · Updated 5 years ago
- Implementation of NEWMA: a new method for scalable model-free online change-point detection ☆45 · Updated 5 years ago
- Python client for the LightOn Muse API ☆14 · Updated 2 years ago
- Python library for running large-scale computations on LightOn's OPUs ☆36 · Updated 3 years ago
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆88 · Updated 2 years ago
- Inference code in PyTorch for GPT-like models, such as PAGnol, a family of models with up to 1.5B parameters, trained on datasets in Fren… ☆20 · Updated 2 years ago
- Experiments with Direct Feedback Alignment and comparison to Backpropagation. ☆8 · Updated 7 years ago
- Architecture embeddings independent of the parametrization of the search space ☆15 · Updated 4 years ago
- Experiments with Direct Feedback Alignment training scheme for DNNs ☆32 · Updated 8 years ago
- Demo: Slightly More Bio-Plausible Backprop ☆21 · Updated 8 years ago
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆13 · Updated last year
- Prototypes of differentiable differential equation solvers in JAX. ☆27 · Updated 5 years ago
- Hessian backpropagation (HBP): PyTorch extension of backpropagation for block-diagonal curvature matrix approximations ☆20 · Updated 2 years ago
- ☆12 · Updated 4 years ago
- Dive into Jax, Flax, XLA and C++ ☆31 · Updated 5 years ago
- MCMC methods for neural networks ☆10 · Updated last year
- PyTorch-based code for training fully-connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct fe… ☆66 · Updated 4 years ago
- Implementation of Nesterov and Polyak's (2006) cubic regularization algorithm and Cartis et al.'s (2011) adaptive cubic regularization alg… ☆18 · Updated 3 years ago
- Code for UAI'19: Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning ☆37 · Updated 4 years ago
- Layered distributions using FLAX/JAX ☆10 · Updated 4 years ago
- Relative gradient optimization of the Jacobian term in unsupervised deep learning, NeurIPS 2020 ☆21 · Updated 4 years ago
- CUDA extension for the SPORCO project ☆18 · Updated 3 years ago
- Morgan A. Schmitz, Matthieu Heitz, Nicolas Bonneel, Fred Ngole, David Coeurjolly, Marco Cuturi, Gabriel Peyré, and Jean-Luc Starck. "Was… ☆19 · Updated 5 years ago