Implementation of Memformer, a Memory-augmented Transformer, in Pytorch
☆126 · updated Nov 13, 2020
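For orientation only: a memory-augmented transformer of this kind keeps a fixed set of external memory slots that each segment of a long sequence reads from and writes back to with attention. The sketch below illustrates that read/write loop in plain PyTorch; it is not the memformer package's API, and every name and hyperparameter in it (`MemoryReadWrite`, `num_mem_slots`, and so on) is invented for illustration.

```python
# Minimal, illustrative sketch of the memory read/write idea behind a
# memory-augmented transformer. Not the memformer package's API; all
# names and hyperparameters here are invented for illustration.
import torch
import torch.nn as nn

class MemoryReadWrite(nn.Module):
    def __init__(self, dim, num_mem_slots=8, num_heads=8):
        super().__init__()
        # learned initial memory slots, shared across sequences
        self.initial_memory = nn.Parameter(torch.randn(num_mem_slots, dim))
        # tokens read from memory (queries = tokens, keys/values = memory)
        self.read_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # memory slots are rewritten by attending over the segment's hidden states
        # (queries = memory slots, keys/values = tokens)
        self.write_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def initial(self, batch_size):
        return self.initial_memory.unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, tokens, memory):
        # read: condition the current segment on the stored memory
        read, _ = self.read_attn(tokens, memory, memory)
        tokens = tokens + read
        # write: update every memory slot with attention rather than a FIFO queue
        new_memory, _ = self.write_attn(memory, tokens, tokens)
        memory = memory + new_memory
        return tokens, memory

# usage: carry `memory` across consecutive segments of a long sequence
layer = MemoryReadWrite(dim=512)
mem = layer.initial(batch_size=2)
for segment in torch.randn(4, 2, 128, 512):  # 4 segments of 128 tokens each
    segment, mem = layer(segment, mem)
```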
Alternatives and similar repositories for memformer
Users that are interested in memformer are comparing it to the libraries listed below.
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · updated Jul 31, 2020
- Pytorch implementation of Compressive Transformers, from Deepmind ☆165 · updated Oct 4, 2021
- Implementation of Token Shift GPT - An autoregressive model that relies solely on shifting the sequence space for mixing (see the token-shift sketch after this list) ☆49 · updated Jan 27, 2022
- A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · updated Nov 8, 2020
- Implementation of Feedback Transformer in Pytorch ☆108 · updated Mar 2, 2021
- Usable implementation of Mogrifier, a circuit for enhancing LSTMs and potentially other networks, from Deepmind ☆22 · updated Jun 9, 2024
- A simple Transformer where the softmax has been replaced with normalization (see the normalization-in-place-of-softmax sketch after this list) ☆20 · updated Sep 11, 2020
- Implementation of Fast Transformer in Pytorch ☆176 · updated Aug 26, 2021
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · updated Aug 4, 2021
- Implementation of Block Recurrent Transformer - Pytorch ☆225 · updated Aug 20, 2024
- Implementation of Recurrent Memory Transformer, Neurips 2022 paper, in Pytorch ☆423 · updated Jan 6, 2025
- A GPT, made only of MLPs, in Jax ☆59 · updated Jun 23, 2021
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆644 · updated Jul 17, 2023
- Graph neural network message passing reframed as a Transformer with local attention ☆70 · updated Dec 24, 2022
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆68 · updated Jan 10, 2023
- An open source implementation of CLIP. ☆11 · updated Mar 26, 2023
- Combining encoder-based language models ☆11 · updated Nov 11, 2021
- An attempt at the implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields, top-down-bottom-up proc… ☆196 · updated Mar 27, 2021
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch ☆54 · updated Mar 30, 2021
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · updated Oct 30, 2021
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · updated Apr 6, 2022
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆16 · updated Aug 3, 2021
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · updated Jun 19, 2022
- Data Release for VALUE Benchmark ☆30 · updated Feb 16, 2022
- Fully featured implementation of Routing Transformer ☆301 · updated Nov 6, 2021
- Explorations into improving ViTArc with Slot Attention ☆43 · updated Oct 19, 2024
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆97 · updated Feb 19, 2021
- Implementation of TransGanFormer, an all-attention GAN that combines findings from the recent GanFormer and TransGan papers ☆155 · updated Apr 27, 2021
- Local Attention - Flax module for Jax ☆22 · updated May 26, 2021
- Implementation of the DDPM + IPA (invariant point attention) for protein generation, as outlined in the paper "Protein Structure and Sequ… ☆89 · updated Jul 25, 2022
- ☆68 · updated Aug 29, 2024
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · updated May 31, 2022
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · updated Mar 3, 2021
- Implementation of Hierarchical Transformer Memory (HTM) for Pytorch ☆76 · updated Sep 15, 2021
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆145 · updated Mar 24, 2025
- My explorations into editing the knowledge and memories of an attention network ☆35 · updated Dec 8, 2022
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆37 · updated Sep 23, 2024
- Associative scan package for DRYing some code between repos ☆18 · updated Jan 5, 2026
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆76 · updated Dec 4, 2022
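Token shifting, referenced in the Token Shift GPT entry above, mixes information along the sequence without attention by replacing part of each token's feature vector with the previous token's values. A minimal sketch, assuming the common split-half formulation; the referenced repository may shift a different fraction or use more chunks.

```python
# Minimal sketch of token shifting: half of the feature dimensions of each
# token are taken from the previous position, so a plain feedforward layer
# afterwards can mix adjacent tokens. One plausible formulation only.
import torch
import torch.nn.functional as F

def token_shift(x):
    # x: (batch, seq_len, dim)
    shifted, kept = x.chunk(2, dim=-1)
    # pad one zero token at the front and drop the last token: a shift by one
    shifted = F.pad(shifted, (0, 0, 1, -1))
    return torch.cat((shifted, kept), dim=-1)

x = torch.randn(2, 16, 64)
out = token_shift(x)  # same shape; half the features now come from the previous step
```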
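The "softmax replaced with normalization" entry above refers to swapping the attention softmax for a simpler normalization of the raw scores. The sketch below shows one plausible realization (relu followed by dividing each row of scores by its sum); the linked repository may normalize differently, so treat the specifics as an assumption.

```python
# Minimal sketch of single-head attention where softmax over the scores is
# replaced by a simple normalization: relu, then divide each query's row of
# weights by its sum. Illustrative only; not necessarily the repo's exact scheme.
import torch
import torch.nn.functional as F

def norm_attention(q, k, v, eps=1e-8):
    # q, k, v: (batch, seq_len, dim)
    scale = q.shape[-1] ** -0.5
    scores = torch.einsum('b i d, b j d -> b i j', q, k) * scale
    weights = F.relu(scores)
    weights = weights / (weights.sum(dim=-1, keepdim=True) + eps)  # replaces softmax
    return torch.einsum('b i j, b j d -> b i d', weights, v)

q = k = v = torch.randn(2, 16, 64)
out = norm_attention(q, k, v)  # (2, 16, 64)
```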