Ads97 / ForwardForward
Explorations with Geoffrey Hinton's Forward-Forward algorithm
☆32 · Updated 8 months ago
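The Forward-Forward algorithm this repo explores replaces backprop with two forward passes: each layer is trained locally to give high "goodness" (e.g. the sum of squared activations) on positive data and low goodness on negative data. A minimal single-layer sketch of that idea, using NumPy and made-up toy data (the threshold `theta`, learning rate, and data clusters are illustrative assumptions, not this repo's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # Goodness of a layer: sum of squared activations per example.
    return (h ** 2).sum(axis=1)

def ff_train_layer(W, x_pos, x_neg, theta=2.0, lr=0.03, steps=200):
    """Train one layer with a local Forward-Forward objective:
    push goodness above theta for positives, below theta for negatives."""
    for _ in range(steps):
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            z = x @ W
            h = np.maximum(z, 0.0)                 # ReLU activations
            g = goodness(h)
            # p = sigmoid(sign * (g - theta)); ascend log p.
            p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))
            dg = sign * (1.0 - p)                  # d log p / d g, per example
            dh = 2.0 * h * dg[:, None]             # d g / d h = 2h
            dz = dh * (z > 0)                      # ReLU gate; no cross-layer backprop
            W += lr * x.T @ dz / len(x)
    return W

# Toy data: positive examples cluster at +1, negatives at -1 (hypothetical).
x_pos = rng.normal(1.0, 0.3, size=(64, 8))
x_neg = rng.normal(-1.0, 0.3, size=(64, 8))
W = rng.normal(0.0, 0.1, size=(8, 16))
W = ff_train_layer(W, x_pos, x_neg)

g_pos = goodness(np.maximum(x_pos @ W, 0.0)).mean()
g_neg = goodness(np.maximum(x_neg @ W, 0.0)).mean()
print(g_pos > g_neg)
```

After training, mean goodness on positives should exceed that on negatives. Because the update uses only quantities local to the layer, deeper networks are trained one layer at a time, each layer receiving the (often normalized) output of the previous one.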
Related projects:
- Reimplementation of Geoffrey Hinton's Forward-Forward Algorithm ☆117 · Updated 10 months ago
- ☆46 · Updated 7 months ago
- Implementation/simulation of the predictive forward-forward credit assignment algorithm for training neurobiologically-plausible recurren… ☆54 · Updated last year
- ☆42 · Updated 3 months ago
- Parallelizing non-linear sequential models over the sequence length ☆40 · Updated last month
- Code for "Meta Learning Backpropagation And Improving It" @ NeurIPS 2021 https://arxiv.org/abs/2012.14905 ☆31 · Updated 2 years ago
- flexible meta-learning in jax ☆12 · Updated 11 months ago
- The Energy Transformer block, in JAX ☆48 · Updated 9 months ago
- ☆28 · Updated last week
- ☆25 · Updated 5 months ago
- ☆55 · Updated 2 years ago
- ☆42 · Updated 7 months ago
- ☆52 · Updated last month
- ☆34 · Updated 2 years ago
- ☆33 · Updated 8 months ago
- Official implementation of "Multi-scale Feature Learning Dynamics: Insights for Double Descent". ☆16 · Updated 2 years ago
- Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023] ☆30 · Updated 3 weeks ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆57 · Updated 10 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆34 · Updated last year
- A simple Python implementation of forward-forward NN training by G. Hinton from NeurIPS 2022 ☆20 · Updated last year
- ☆50 · Updated last year
- ☆23 · Updated 6 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆77 · Updated last year
- Implementation of "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch ☆94 · Updated last year
- Fast training of unitary deep network layers from low-rank updates ☆28 · Updated last year
- Official implementation of the transformer (TF) architecture suggested in a paper entitled "Looped Transformers as Programmable Computers… ☆21 · Updated last year
- seqax = sequence modeling + JAX ☆129 · Updated 2 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆50 · Updated 10 months ago
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆27 · Updated last year
- ☆19 · Updated last month