loeweX / Forward-Forward
Reimplementation of Geoffrey Hinton's Forward-Forward Algorithm
☆157 · Updated last year
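For context, the Forward-Forward algorithm replaces backpropagation with a layer-local objective: each layer is trained to push its "goodness" (the sum of squared activations) above a threshold for positive data and below it for negative data, with the input to each layer length-normalized. A minimal PyTorch sketch of one such layer, assuming illustrative names (`FFLayer`, `train_step`) that are not the API of this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer trained with a Forward-Forward-style local objective
    (minimal sketch; hypothetical names, not the loeweX repo's API)."""

    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold  # goodness threshold theta
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only its direction, not its magnitude,
        # carries information into this layer (as in Hinton's paper).
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activations.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Softplus loss: want g_pos > theta and g_neg < theta.
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,   # positive samples
            g_neg - self.threshold,   # negative samples
        ])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so the next layer trains on its own local
        # objective; no gradients flow between layers.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Layers built this way are trained greedily in sequence; at inference, the paper scores each candidate label by the total goodness it produces across layers.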
Alternatives and similar repositories for Forward-Forward
Users interested in Forward-Forward are comparing it to the repositories listed below.
- Implementation/simulation of the predictive forward-forward credit assignment algorithm for training neurobiologically-plausible recurren… ☆60 · Updated 2 years ago
- Implementation of the Forward-Forward network proposed by Hinton at NeurIPS 2022. ☆170 · Updated 2 years ago
- ☆306 · Updated 10 months ago
- Neural Networks and the Chomsky Hierarchy ☆211 · Updated last year
- Spyx: Spiking Neural Networks in JAX ☆126 · Updated last year
- ☆188 · Updated last year
- Implementation of "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch☆113Updated 2 years ago
- ☆234 · Updated 8 months ago
- Implementation of Block Recurrent Transformer - Pytorch ☆221 · Updated last year
- Code to simulate energy-based analog systems and equilibrium propagation ☆30 · Updated 7 months ago
- Forward Pass Learning and Inference Library, for neural networks and general intelligence, Signal Propagation (sigprop) ☆55 · Updated 2 years ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆173 · Updated 2 years ago
- Easy Hypernetworks in Pytorch and Jax ☆105 · Updated 2 years ago
- Annotated version of the Mamba paper ☆490 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆504 · Updated 4 months ago
- ☆164 · Updated 2 years ago
- NGC-Learn: Neurobiological Systems Simulation and NeuroAI Design in Python ☆164 · Updated this week
- Unofficial implementation of Linear Recurrent Units, by Deepmind, in Pytorch ☆71 · Updated 6 months ago
- ☆219 · Updated 2 years ago
- Simple, minimal implementation of the Mamba SSM in one pytorch file. Using logcumsumexp (Heisen sequence). ☆125 · Updated last year
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆59 · Updated 2 months ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆107 · Updated 4 years ago
- PyTorch implementation of Mixer-nano (0.67M parameters, vs. 18M for the original Mixer-S/16) with 90.83% accuracy on CIFAR-10. Training from s… ☆36 · Updated 4 years ago
- A lightweight and flexible framework for Hebbian learning in PyTorch. ☆91 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- JAX-based predictive coding library ☆83 · Updated 6 months ago
- ☆166 · Updated 2 years ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆121 · Updated last year
- Code for NEMO and the Assembly Calculus ☆107 · Updated 11 months ago
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆89 · Updated 3 years ago
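The forward-gradient entry above (arXiv:2202.08587) trains without a backward pass: sample a random tangent direction v, compute the directional derivative ∇f·v in a single forward pass via a Jacobian-vector product, and use (∇f·v)·v as an unbiased gradient estimate. A minimal sketch using torch.func (the in-tree successor of functorch); all names here are illustrative and not the listed repository's API:

```python
import torch
from torch.func import functional_call, jvp

# Toy model and data for the sketch (hypothetical, for illustration only).
model = torch.nn.Linear(10, 1)
params = dict(model.named_parameters())
x, y = torch.randn(32, 10), torch.randn(32, 1)

def loss_fn(p):
    # Functional loss: evaluate the model with explicit parameters p.
    return torch.nn.functional.mse_loss(functional_call(model, p, (x,)), y)

# Random tangent direction, one tensor per parameter.
v = {k: torch.randn_like(p) for k, p in params.items()}

# Single forward pass: loss value and directional derivative along v.
loss, dir_deriv = jvp(loss_fn, (params,), (v,))

# Forward-gradient update: g ≈ (∇f · v) v, no backward pass needed.
lr = 1e-2
with torch.no_grad():
    for k, p in model.named_parameters():
        p -= lr * dir_deriv * v[k]
```

The estimate is unbiased but higher-variance than the true gradient, which is the trade-off the paper studies.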