lightonai / dfa-scales-to-modern-deep-learning
Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing.
☆90 · Updated 3 years ago
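The repository above studies Direct Feedback Alignment (DFA), which replaces backpropagation's transposed-weight feedback with fixed random matrices that project the output error directly to each hidden layer. Below is a minimal illustrative sketch of that idea, not the repository's own code; the two-layer tanh network, layer sizes, toy target, and learning rate are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP trained with Direct Feedback Alignment (DFA).
# Sizes and the single-sample task are illustrative assumptions.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
# Fixed random feedback matrix: DFA uses B1 in place of W2.T,
# so no weight transport from the forward path is needed.
B1 = rng.normal(0, 0.1, (n_hid, n_out))

x = rng.normal(size=(n_in,))
target = np.zeros(n_out)
target[1] = 1.0

lr = 0.1
for _ in range(200):
    h = np.tanh(W1 @ x)          # hidden activations
    y = W2 @ h                   # linear output layer
    e = y - target               # output error
    # DFA step: project the output error straight to the hidden
    # layer through the fixed random B1, gated by the local
    # derivative of tanh (1 - h**2).
    dh = (B1 @ e) * (1 - h**2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

loss = float(((W2 @ np.tanh(W1 @ x) - target) ** 2).sum())
```

On this toy problem the squared error drops to near zero even though the hidden layer never sees the true gradient, which is the alignment effect the DFA literature describes.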
Alternatives and similar repositories for dfa-scales-to-modern-deep-learning
Users interested in dfa-scales-to-modern-deep-learning are comparing it to the libraries listed below.
- 🧀 Pytorch code for the Fromage optimiser. ☆129 · Updated last year
- Structured matrices for compressing neural networks ☆67 · Updated 2 years ago
- ☆67 · Updated 6 years ago
- Hessian spectral density estimation in TF and Jax ☆124 · Updated 5 years ago
- Experiments for Meta-Learning Symmetries by Reparameterization ☆57 · Updated 4 years ago
- Fully documented Pytorch implementation of the Equilibrium Propagation algorithm. ☆37 · Updated 5 years ago
- PyTorch-SSO: Scalable Second-Order methods in PyTorch ☆148 · Updated 2 years ago
- 👩 Pytorch and Jax code for the Madam optimiser. ☆53 · Updated 4 years ago
- ☆100 · Updated 3 years ago
- Python implementation of the methods in Meulemans et al. 2020 - A Theoretical Framework For Target Propagation ☆32 · Updated last year
- Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input (NeurIPS 2019) ☆14 · Updated last year
- Code for our paper on best practices to train neural networks with direct feedback alignment (DFA). ☆23 · Updated 6 years ago
- ☆45 · Updated 6 years ago
- Memory efficient MAML using gradient checkpointing ☆86 · Updated 5 years ago
- Butterfly matrix multiplication in PyTorch ☆176 · Updated 2 years ago
- Computing the eigenvalues of Neural Tangent Kernel and Conjugate Kernel (aka NNGP kernel) over the boolean cube ☆47 · Updated 6 years ago
- Official code for Coupled Oscillatory RNN (ICLR 2021, Oral) ☆50 · Updated 4 years ago
- Neural Turing Machines in pytorch ☆48 · Updated 3 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago
- Code for the article "What if Neural Networks had SVDs?", to be presented as a spotlight paper at NeurIPS 2020. ☆77 · Updated last year
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆40 · Updated 5 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆110 · Updated 4 years ago
- ☆54 · Updated last year
- Code for the paper: "Tensor Programs II: Neural Tangent Kernel for Any Architecture" ☆104 · Updated 5 years ago
- A custom PyTorch layer that is capable of implementing extremely wide and sparse linear layers efficiently ☆51 · Updated last year
- The original code for the paper "How to train your MAML" along with a replication of the original "Model Agnostic Meta Learning" (MAML) p… ☆41 · Updated 5 years ago
- Code for experiments in my blog post on the Neural Tangent Kernel: https://eigentales.com/NTK ☆173 · Updated 6 years ago
- ☆133 · Updated 4 years ago
- Jupyter Notebook corresponding to 'Going with the Flow: An Introduction to Normalizing Flows' ☆27 · Updated 4 years ago
- Experiments for the paper "Exponential expressivity in deep neural networks through transient chaos" ☆73 · Updated 9 years ago