Implementation of Memorizing Transformers (ICLR 2022), an attention net augmented with indexing and retrieval of memories using approximate nearest neighbors, in Pytorch
☆644 · Updated Jul 17, 2023
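The core mechanism named above — augmenting attention with nearest-neighbor retrieval over a cache of past (key, value) pairs — can be sketched minimally as follows. This is an illustrative toy, not code from the repository: names are hypothetical, and exact dot-product search stands in for the approximate index (e.g. a faiss index) a real implementation would use.

```python
import numpy as np

def knn_memory_attention(queries, mem_keys, mem_values, k=3):
    """For each query, attend over only its top-k most similar memory keys."""
    # similarity of every query to every cached key: shape (n_queries, n_mem)
    sims = queries @ mem_keys.T
    # indices of the k most similar memory slots per query
    topk = np.argsort(-sims, axis=-1)[:, :k]
    out = np.empty_like(queries)
    for i, idx in enumerate(topk):
        # softmax restricted to the retrieved subset
        s = sims[i, idx]
        w = np.exp(s - s.max())
        w /= w.sum()
        out[i] = w @ mem_values[idx]
    return out

rng = np.random.default_rng(0)
mem_k = rng.normal(size=(128, 16))   # keys cached from earlier segments
mem_v = rng.normal(size=(128, 16))   # values cached from earlier segments
q = rng.normal(size=(4, 16))         # queries from the current segment
retrieved = knn_memory_attention(q, mem_k, mem_v, k=8)
print(retrieved.shape)  # (4, 16)
```

In the paper, this retrieved output is gated together with standard local attention in one layer; the sketch shows only the retrieval half.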
Alternatives and similar repositories for memorizing-transformers-pytorch
Users interested in memorizing-transformers-pytorch are comparing it to the libraries listed below.
- Implementation of RETRO, DeepMind's retrieval-based attention net, in Pytorch ☆880 · Updated Oct 30, 2023
- ☆260 · Updated Jun 6, 2025
- Implementation of Block Recurrent Transformer - Pytorch ☆225 · Updated Aug 20, 2024
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆390 · Updated Jul 18, 2023
- Implementation of Memformer, a memory-augmented Transformer, in Pytorch ☆126 · Updated Nov 13, 2020
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆76 · Updated Dec 4, 2022
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆99 · Updated Dec 31, 2021
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆227 · Updated Mar 25, 2026
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,848 · Updated Apr 26, 2026
- Implementation of fused cosine-similarity attention in the same style as Flash Attention ☆220 · Updated Feb 13, 2023
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated Dec 8, 2022
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated Jul 9, 2023
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆190 · Updated Jun 24, 2022
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated Apr 11, 2022
- Implementation of Mega, the single-head attention with multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Updated Aug 26, 2023
- Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in P… ☆209 · Updated Feb 14, 2024
- Pytorch implementation of Compressive Transformers, from DeepMind ☆165 · Updated Oct 4, 2021
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆231 · Updated Sep 6, 2024
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated Jan 12, 2023
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) ☆193 · Updated Jun 14, 2023
- An implementation of local windowed attention for language modeling ☆498 · Updated Jul 16, 2025
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆423 · Updated Jan 6, 2025
- Implementation of LogAvgExp for Pytorch ☆37 · Updated Apr 10, 2025
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆70 · Updated Apr 10, 2023
- Running large language models on a single GPU for throughput-oriented scenarios ☆9,366 · Updated Oct 28, 2024
- Implementation of Feedback Transformer in Pytorch ☆108 · Updated Mar 2, 2021
- Implementation of RQ-Transformer, proposed in the paper "Autoregressive Image Generation using Residual Quantization" ☆125 · Updated Apr 19, 2022
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,202 · Updated Aug 22, 2023
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆426 · Updated Dec 20, 2023
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆829 · Updated Nov 9, 2022
- Pytorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆75 · Updated Jan 20, 2022
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,503 · Updated this week
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in Pytorch ☆1,267 · Updated Oct 18, 2022
- Axial Positional Embedding for Pytorch ☆84 · Updated Feb 25, 2025
- Implementation of Retrieval-Augmented Denoising Diffusion Probabilistic Models in Pytorch ☆66 · Updated May 5, 2022
- Contrastive Language-Audio Pretraining ☆15 · Updated May 18, 2021
- Implementation of Fast Transformer in Pytorch ☆176 · Updated Aug 26, 2021
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆102 · Updated Feb 25, 2023
- Reformer, the efficient Transformer, in Pytorch ☆2,190 · Updated Jun 21, 2023
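Several entries above center on memory-efficient attention ("Self-attention Does Not Need O(n²) Memory", the fused Flash-style attention, local windowed attention). The shared trick is processing keys and values in chunks while maintaining running softmax statistics, so the full n×n score matrix is never materialized. A minimal sketch with illustrative names, cross-checked against naive full-matrix attention:

```python
import numpy as np

def chunked_attention(q, k, v, chunk=32):
    """Exact softmax attention computed chunk-by-chunk over keys/values."""
    n, d = q.shape
    out = np.zeros_like(q)
    running_max = np.full(n, -np.inf)  # per-query max score seen so far
    running_sum = np.zeros(n)          # per-query softmax denominator so far
    for start in range(0, k.shape[0], chunk):
        kc = k[start:start + chunk]
        vc = v[start:start + chunk]
        s = q @ kc.T / np.sqrt(d)                       # scores for this chunk
        new_max = np.maximum(running_max, s.max(axis=1))
        correction = np.exp(running_max - new_max)       # rescale old partials
        p = np.exp(s - new_max[:, None])                 # stabilized weights
        out = out * correction[:, None] + p @ vc
        running_sum = running_sum * correction + p.sum(axis=1)
        running_max = new_max
    return out / running_sum[:, None]

rng = np.random.default_rng(1)
q = rng.normal(size=(10, 8))
k = rng.normal(size=(70, 8))
v = rng.normal(size=(70, 8))
out = chunked_attention(q, k, v, chunk=16)

# cross-check against naive attention that builds the full score matrix
s = q @ k.T / np.sqrt(8)
w = np.exp(s - s.max(axis=1, keepdims=True))
w /= w.sum(axis=1, keepdims=True)
print(np.allclose(out, w @ v))  # True
```

The online-softmax rescaling (`correction`) is what makes the chunked result exact rather than an approximation; the fused CUDA kernels in the repositories above implement the same recurrence on-chip.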