antofuller / configaformers
A Python library for highly configurable transformers, easing model architecture search and experimentation.
☆49 · Updated Nov 30, 2021
Alternatives and similar repositories for configaformers
Users interested in configaformers are comparing it to the libraries listed below.
- ☆21 · Updated Mar 15, 2023
- A lightweight transformer library for PyTorch · ☆72 · Updated Nov 2, 2021
- Code for the paper PermuteFormer · ☆41 · Updated Oct 10, 2021
- Unofficial implementation of https://arxiv.org/abs/2112.05682 for linear memory cost in attention, in PyTorch (see the chunked-attention sketch after this list) · ☆12 · Updated Jan 16, 2022
- Official PyTorch implementation of the paper "SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients" · ☆17 · Updated Jan 12, 2022
- Python Research Framework · ☆107 · Updated Nov 3, 2022
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting along the sequence dimension for mixing (see the token-shift sketch after this list) · ☆49 · Updated Jan 27, 2022
- Implementation of Multistream Transformers in PyTorch · ☆54 · Updated Jul 31, 2021
- Implementation of Fast Transformer in PyTorch · ☆176 · Updated Aug 26, 2021
- Code for the paper "Normalized Attention Without Probability Cage" · ☆17 · Updated Nov 9, 2021
- Implementation of COCO-LM (Correcting and Contrasting Text Sequences for Language Model Pretraining) in PyTorch · ☆46 · Updated Mar 3, 2021
- Fast Discounted Cumulative Sums in PyTorch · ☆97 · Updated Aug 28, 2021
- Unified API to facilitate usage of pre-trained "perceptor" models, a la CLIP · ☆39 · Updated Nov 26, 2022
- ☆22 · Updated May 3, 2022
- Implementations of various deep-learning tasks and a GUI application · ☆15 · Updated Mar 20, 2023
- Hidden Engrams: Long Term Memory for Transformer Model Inference · ☆35 · Updated Jun 26, 2021
- A JAX nn library · ☆22 · Updated Sep 9, 2025
- A GPT, made only of MLPs, in JAX · ☆59 · Updated Jun 23, 2021
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training · ☆120 · Updated Aug 9, 2021
- ☆81 · Updated Jan 21, 2022
- GPT, but made only out of MLPs · ☆89 · Updated May 25, 2021
- Anime Character Segmentation · ☆11 · Updated Aug 31, 2020
- Implementation / replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch · ☆88 · Updated Dec 3, 2021
- RWKV-v2-RNN trained on the Pile (see https://github.com/BlinkDL/RWKV-LM for details) · ☆67 · Updated Sep 14, 2022
- Implementation of Hourglass Transformer, in PyTorch, from Google and OpenAI · ☆98 · Updated Dec 31, 2021
- PyTorch implementation of trDesign · ☆44 · Updated Mar 19, 2021
- Source code for "N-ary Constituent Tree Parsing with Recursive Semi-Markov Model" published at ACL 2021 · ☆10 · Updated May 27, 2021
- Open-Retrieval Conversational Machine Reading: a new setting & OR-ShARC dataset · ☆13 · Updated Nov 19, 2022
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch · ☆76 · Updated Dec 4, 2022
- Contrastive Language-Image Pretraining · ☆144 · Updated Sep 6, 2022
- Official code for an EMNLP 2021 paper · ☆36 · Updated Dec 14, 2021
- PyTorch implementation of the Hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" · ☆99 · Updated Jan 13, 2021
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) · ☆102 · Updated Nov 2, 2020
- 🖼️📊 · ☆11 · Updated Jun 9, 2020
- Notebooks of experiments with the fastai library · ☆11 · Updated Oct 13, 2018
- Code for the ACL 2023 oral paper "ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning" · ☆12 · Updated Aug 23, 2025
- ☆10 · Updated Sep 13, 2021
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … · ☆39 · Updated Aug 3, 2021
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (see the ReLA sketch below) · ☆49 · Updated Apr 6, 2022
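
The memory-efficient attention entry above (https://arxiv.org/abs/2112.05682) works by processing the key/value sequence in chunks while maintaining a running softmax normalizer, so peak memory grows linearly rather than quadratically with sequence length. Below is a minimal sketch of that general technique, assuming a (batch, heads, seq_len, head_dim) layout; the function name and shapes are illustrative, not the listed repo's actual API.

```python
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    """Attention over key/value chunks with a running log-sum-exp,
    so full (q_len x k_len) score matrices are never materialized."""
    # q, k, v: (batch, heads, seq_len, head_dim) -- assumed layout
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    # running log-sum-exp of attention scores per query position
    lse = torch.full(q.shape[:-1], float("-inf"), device=q.device, dtype=q.dtype)
    for i in range(0, k.shape[-2], chunk_size):
        k_c = k[..., i:i + chunk_size, :]
        v_c = v[..., i:i + chunk_size, :]
        s = torch.matmul(q, k_c.transpose(-2, -1)) * scale   # (..., q_len, chunk)
        chunk_lse = torch.logsumexp(s, dim=-1)               # (..., q_len)
        new_lse = torch.logaddexp(lse, chunk_lse)
        # rescale the accumulated output, then add this chunk's contribution
        out = out * torch.exp(lse - new_lse).unsqueeze(-1)
        out = out + torch.matmul(torch.exp(s - new_lse.unsqueeze(-1)), v_c)
        lse = new_lse
    return out
```

Because each chunk's weights are renormalized against the running log-sum-exp, the result matches ordinary softmax attention up to floating-point error.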
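Token Shift GPT's core trick is mixing information across time by shifting feature channels along the sequence instead of using attention. A minimal standalone sketch of that shift, assuming (batch, seq_len, dim) inputs; the module name is illustrative, not the repo's API.

```python
import torch
import torch.nn.functional as F

class TokenShift(torch.nn.Module):
    """Shift half of each token's feature channels one position back
    along the sequence, so position t sees its own features alongside
    position t-1's (causal by construction)."""
    def forward(self, x):
        # x: (batch, seq_len, dim) -- assumed layout
        x_shift, x_keep = x.chunk(2, dim=-1)
        # pad one step at the start of time, drop the last step
        x_shift = F.pad(x_shift, (0, 0, 1, -1))
        return torch.cat((x_shift, x_keep), dim=-1)
```

In the full model this shift, combined with feedforward layers, stands in for self-attention entirely.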
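ReLA (https://arxiv.org/abs/2104.07012) replaces the softmax in scaled dot-product attention with a ReLU, yielding sparse, non-negative weights. A minimal sketch of that substitution, again with an assumed tensor layout; note the paper additionally normalizes the attention output (e.g. with an RMSNorm-style layer) for stability, since ReLU weights do not sum to one.

```python
import torch
import torch.nn.functional as F

def rectified_linear_attention(q, k, v):
    """Scaled dot-product attention with ReLU in place of softmax."""
    # q, k, v: (batch, heads, seq_len, head_dim) -- assumed layout
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    weights = F.relu(scores)          # softmax -> ReLU is the only change
    return torch.matmul(weights, v)
```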