JacksonWuxs / Forward-Forward-Network
Implementation of the Forward-Forward Network proposed by Hinton at NeurIPS 2022.
☆170 · Updated 2 years ago
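The repository implements Hinton's Forward-Forward algorithm, which trains each layer with a purely local objective instead of end-to-end backpropagation: a layer's "goodness" (the sum of its squared activations) is pushed above a threshold on positive data and below it on negative data, using two forward passes and no backward pass between layers. The sketch below is a minimal, illustrative PyTorch version of one such layer; the class name `FFLayer`, the layer sizes, the threshold, and the learning rate are assumptions for illustration, not the repository's actual API.

```python
# Minimal sketch of a Forward-Forward layer (illustrative; not the repo's API).
# Each layer is trained locally: "goodness" (sum of squared activations) should
# exceed a threshold for positive samples and fall below it for negative ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only the direction (not the magnitude)
        # of the previous layer's activity is passed on.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Local objective: push goodness of positive data above the threshold
        # and goodness of negative data below it; no gradient reaches other layers.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach so the next layer trains on this layer's output without backprop.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Usage sketch: train a stack of layers greedily, one local step per layer.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.rand(64, 784)  # e.g. images with the correct label embedded
x_neg = torch.rand(64, 784)  # e.g. images with an incorrect label embedded
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

Because each layer detaches its output before passing it on, the stack can be trained greedily layer by layer, which is the property that distinguishes Forward-Forward from backprop-trained networks.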
Alternatives and similar repositories for Forward-Forward-Network
Users interested in Forward-Forward-Network are comparing it to the libraries listed below.
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆344 · Updated 10 months ago
- ☆201 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆378 · Updated last year
- Reimplementation of Geoffrey Hinton's Forward-Forward Algorithm ☆162 · Updated 2 years ago
- Implementation of Block Recurrent Transformer - Pytorch ☆223 · Updated last year
- [ICLR 2023] Official implementation of our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆81 · Updated last year
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆372 · Updated 2 years ago
- ☆144 · Updated last year
- ☆33 · Updated 4 years ago
- ICLR 2023 paper submission analysis from https://openreview.net/group?id=ICLR.cc/2023/Conference ☆107 · Updated 3 years ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆198 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆64 · Updated 2 years ago
- ☆67 · Updated 4 years ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆422 · Updated last year
- Implementation of Linformer for Pytorch ☆305 · Updated 2 years ago
- ☆106 · Updated last year
- ICLR 2023 statistics ☆59 · Updated 2 years ago
- A repository for DenseSSMs ☆88 · Updated last year
- ☆222 · Updated 2 years ago
- OpenReview Submission Visualization (ICLR 2024/2025) ☆154 · Updated last year
- Crawl & visualize ICLR papers and reviews ☆110 · Updated 3 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆68 · Updated 2 years ago
- ☆292 · Updated last year
- Implementation of Infini-Transformer in Pytorch ☆112 · Updated last year
- PyTorch implementation of LIMoE ☆52 · Updated last year
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated 2 years ago
- Sequence modeling with Mega. ☆303 · Updated 3 years ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆82 · Updated 2 years ago
- A curated list of Model Merging methods. ☆96 · Updated 2 months ago