lucidrains / memorizing-transformers-pytorch
Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate nearest neighbors, in Pytorch
☆641 · Jul 17, 2023 · Updated 2 years ago
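For orientation, here is a minimal, self-contained sketch of the core idea behind the repository: each query retrieves its nearest (key, value) pairs from a growing external memory of past activations and attends over them together with the local context. Everything below is illustrative rather than the library's API: the class name KNNMemoryAttention and the num_retrieved argument are made up, exact search stands in for the approximate nearest-neighbor index, and batching, multi-head attention, causal masking and the paper's gating between local and memory attention are all omitted.

```python
import torch
from torch import nn

class KNNMemoryAttention(nn.Module):
    """Toy single-head attention over local context plus kNN-retrieved memories (illustrative only)."""

    def __init__(self, dim, num_retrieved=32):
        super().__init__()
        self.scale = dim ** -0.5
        self.num_retrieved = num_retrieved
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # external memory of past (key, value) pairs, grown as tokens are processed
        self.register_buffer('mem_k', torch.empty(0, dim))
        self.register_buffer('mem_v', torch.empty(0, dim))

    def forward(self, x):
        # x: (seq_len, dim); batching, heads and causal masking omitted for brevity
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        local_scores = (q @ k.t()) * self.scale                    # (n, n)

        if self.mem_k.shape[0] > 0:
            top = min(self.num_retrieved, self.mem_k.shape[0])
            # exact nearest-neighbor search here; the paper uses an approximate index at scale
            idx = (q @ self.mem_k.t()).topk(top, dim=-1).indices   # (n, top)
            mem_k, mem_v = self.mem_k[idx], self.mem_v[idx]        # (n, top, dim)
            mem_scores = torch.einsum('nd,ntd->nt', q, mem_k) * self.scale
            attn = torch.cat([mem_scores, local_scores], dim=-1).softmax(dim=-1)
            out = torch.einsum('nt,ntd->nd', attn[:, :top], mem_v) + attn[:, top:] @ v
        else:
            out = local_scores.softmax(dim=-1) @ v

        # stash current keys / values so future segments can retrieve them
        self.mem_k = torch.cat([self.mem_k, k.detach()], dim=0)
        self.mem_v = torch.cat([self.mem_v, v.detach()], dim=0)
        return out

# usage: feed segments one after another; later segments retrieve memories of earlier ones
attn = KNNMemoryAttention(dim=64)
for segment in torch.randn(4, 128, 64):    # 4 segments of 128 tokens each
    out = segment + attn(segment)          # (128, 64)
```

The loop at the bottom only shows that later segments can attend to keys stashed by earlier ones; see the repository's README for how the actual model wires memory retrieval into specific layers.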
Alternatives and similar repositories for memorizing-transformers-pytorch
Users interested in memorizing-transformers-pytorch are comparing it to the libraries listed below
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch (☆879 · Oct 30, 2023 · Updated 2 years ago)
- Implementation of Block Recurrent Transformer - Pytorch (☆224 · Aug 20, 2024 · Updated last year)
- ☆259 · Jun 6, 2025 · Updated 8 months ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI (☆98 · Dec 31, 2021 · Updated 4 years ago)
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) (☆190 · Jun 24, 2022 · Updated 3 years ago)
- Implementation of Discrete Key / Value Bottleneck, in Pytorch (☆88 · Jul 9, 2023 · Updated 2 years ago)
- A concise but complete full-attention transformer with a set of promising experimental features from various papers (☆5,800 · Feb 7, 2026 · Updated last week)
- Implementation of fused cosine similarity attention in the same style as Flash Attention (☆220 · Feb 13, 2023 · Updated 3 years ago)
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT (☆224 · Aug 20, 2024 · Updated last year)
- Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in P… (☆207 · Feb 14, 2024 · Updated 2 years ago)
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena (☆207 · Aug 26, 2023 · Updated 2 years ago)
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch (☆126 · Nov 13, 2020 · Updated 5 years ago)
- Implementation of LogAvgExp for Pytorch (☆37 · Apr 10, 2025 · Updated 10 months ago)
- My explorations into editing the knowledge and memories of an attention network (☆35 · Dec 8, 2022 · Updated 3 years ago)
- Pytorch implementation of Compressive Transformers, from Deepmind (☆163 · Oct 4, 2021 · Updated 4 years ago)
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch (☆422 · Jan 6, 2025 · Updated last year)
- Implementation of Multistream Transformers in Pytorch (☆54 · Jul 31, 2021 · Updated 4 years ago)
- Implementation of Feedback Transformer in Pytorch (☆108 · Mar 2, 2021 · Updated 4 years ago)
- Implementation of Retrieval-Augmented Denoising Diffusion Probabilistic Models in Pytorch (☆66 · May 5, 2022 · Updated 3 years ago)
- A Python library for highly configurable transformers - easing model architecture search and experimentation. (☆49 · Nov 30, 2021 · Updated 4 years ago)
- Implementation of a Transformer, but completely in Triton (☆279 · Apr 5, 2022 · Updated 3 years ago)
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways (☆828 · Nov 9, 2022 · Updated 3 years ago)
- Running large language models on a single GPU for throughput-oriented scenarios. (☆9,384 · Oct 28, 2024 · Updated last year)
- Implementation of RQ Transformer, proposed in the paper "Autoregressive Image Generation using Residual Quantization" (☆124 · Apr 19, 2022 · Updated 3 years ago)
- PyTorch interface for TrueGrad Optimizers (☆43 · Aug 8, 2023 · Updated 2 years ago)
- A concise but complete implementation of CLIP with various experimental improvements from recent papers (☆722 · Oct 16, 2023 · Updated 2 years ago)
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch (☆2,184 · Nov 27, 2024 · Updated last year)
- Reformer, the efficient Transformer, in Pytorch (☆2,193 · Jun 21, 2023 · Updated 2 years ago)
- [EMNLP 2022] Training Language Models with Memory Augmentation https://arxiv.org/abs/2205.12674 (☆195 · Jun 14, 2023 · Updated 2 years ago)
- One stop shop for all things carp (☆59 · Sep 9, 2022 · Updated 3 years ago)
- Maximal update parametrization (µP) (☆1,676 · Jul 17, 2024 · Updated last year)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,351 · Updated this week)
- Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning (☆769 · Apr 7, 2023 · Updated 2 years ago)
- Implementation of ETSformer, state of the art time-series Transformer, in Pytorch (☆155 · Aug 26, 2023 · Updated 2 years ago)
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" (☆1,066 · Mar 7, 2024 · Updated last year)
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models (☆87 · Nov 1, 2025 · Updated 3 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers (☆426 · Dec 20, 2023 · Updated 2 years ago)
- An open source implementation of CLIP. (☆33 · Nov 7, 2022 · Updated 3 years ago)
- Language Modeling with the H3 State Space Model (☆522 · Sep 29, 2023 · Updated 2 years ago)