lightonai / double-trouble-in-double-descent
Double Trouble in the Double Descent Curve with Optical Processing Units.
☆12 · Updated 3 years ago
Alternatives and similar repositories for double-trouble-in-double-descent
Users interested in double-trouble-in-double-descent are comparing it to the libraries listed below.
- Python implementation of supervised PCA, supervised random projections, and their kernel counterparts. ☆20 · Updated 5 years ago
- Conformational exploration of SARS-CoV-2 (the coronavirus responsible for COVID-19) ☆16 · Updated 3 years ago
- Performance benchmarks for ML tasks on LightOn's Optical Processing Unit (OPU) vs. CPU and GPU. ☆22 · Updated last year
- Optical Transfer Learning ☆27 · Updated last year
- Double Descent Curve with Optical Random Features ☆29 · Updated 3 years ago
- Fast graph classifier with optical random features ☆12 · Updated 4 years ago
- Code to perform Model-Free Episodic Control using Aurora OPUs ☆17 · Updated 5 years ago
- Code for our paper on best practices to train neural networks with direct feedback alignment (DFA). ☆21 · Updated 6 years ago
- Implementation of NEWMA: a new method for scalable model-free online change-point detection ☆47 · Updated 5 years ago
- Python client for the LightOn Muse API ☆14 · Updated 2 years ago
- Python library for running large-scale computations on LightOn's OPUs ☆36 · Updated 3 years ago
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆88 · Updated 3 years ago
- Public rankings of extreme-scale models ☆13 · Updated 3 years ago
- Architecture embeddings independent from the parametrization of the search space ☆15 · Updated 4 years ago
- Inference code in PyTorch for GPT-like models, such as PAGnol, a family of models with up to 1.5B parameters, trained on datasets in Fren… ☆20 · Updated 2 years ago
- RKHS feature vectors, operators, and statistical models using JAX for automatic differentiation ☆8 · Updated 4 years ago
- Experiments with Direct Feedback Alignment and comparison to Backpropagation. ☆8 · Updated 7 years ago
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆14 · Updated last year
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆32 · Updated 8 years ago
- Orbital MCMC ☆10 · Updated 4 years ago
- Dive into JAX, Flax, XLA, and C++ ☆31 · Updated 5 years ago
- PyTorch-based code for training fully-connected and convolutional networks using backpropagation (BP), feedback alignment (FA), direct fe… ☆66 · Updated 4 years ago
- Prototypes of differentiable differential equation solvers in JAX. ☆27 · Updated 5 years ago
- Convolutions and more as einsum for PyTorch ☆16 · Updated last year
- Layered distributions using Flax/JAX ☆10 · Updated 4 years ago
- MATLAB code implementing Hamiltonian Annealed Importance Sampling for importance weight, partition function, and log-likelihood estimatio… ☆26 · Updated 10 years ago
- Discontinuous Hamiltonian Monte Carlo in JAX ☆41 · Updated 5 years ago
- A differentiation API for PyTorch ☆30 · Updated 5 years ago
- Simple JAX-/NumPy-based implementations of NGD with exact/approximate Fisher Information Matrix both in parameter-space and function-spac… ☆14 · Updated 4 years ago
- Python code (packaged in a Docker container) to run the experiments in "A Greedy Algorithm for Quantizing Neural Networks" by Eric Lybrand … ☆20 · Updated 4 years ago