Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply.
☆374, updated Dec 12, 2024
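For intuition, here is a minimal PyTorch sketch of the trainable key-value lookup idea. It is not the implementation in this repository: real memory layers use product-key lookup and other tricks to scale to millions of slots, whereas this sketch brute-force scores a small key table, and the module name and sizes are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayerSketch(nn.Module):
    """Illustrative sparsely activated memory layer (brute-force, not product-key)."""

    def __init__(self, dim: int, num_slots: int = 16384, topk: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)                  # hidden state -> query
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * dim ** -0.5)
        self.values = nn.Embedding(num_slots, dim)             # large, sparsely accessed value table
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        q = self.query_proj(x)                                 # (B, T, D)
        scores = q @ self.keys.t()                             # (B, T, num_slots)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)   # keep only k slots per token
        weights = F.softmax(top_scores, dim=-1)                # (B, T, k)
        gathered = self.values(top_idx)                        # (B, T, k, D): only k values are read
        out = (weights.unsqueeze(-1) * gathered).sum(dim=-2)   # (B, T, D)
        return x + out                                         # residual, used like an FFN block

layer = MemoryLayerSketch(dim=256)
print(layer(torch.randn(2, 8, 256)).shape)  # torch.Size([2, 8, 256])
```

Growing `num_slots` adds parameters in the key and value tables while each token still reads only `topk` value vectors, which is the sense in which capacity grows without a matching increase in per-token compute (the brute-force key scoring above does scale with `num_slots`; product-key lookup avoids that).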
Alternatives and similar repositories for memory
Users who are interested in memory are comparing it to the libraries listed below:
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning☆144Feb 25, 2026Updated 3 weeks ago
- Large Concept Models: Language modeling in a sentence representation space☆2,342Jan 29, 2025Updated last year
- Official repo of paper LM2☆47Feb 13, 2025Updated last year
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters☆589Feb 11, 2025Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"☆252Jan 31, 2025Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" (☆247, updated Sep 12, 2025)
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… (☆157, updated Apr 7, 2025)
- The evaluation framework for training-free sparse attention in LLMs (☆122, updated Jan 27, 2026)
- Code for the BLT research paper (☆2,030, updated Nov 3, 2025)
- Training Large Language Models to Reason in a Continuous Latent Space (☆1,536, updated Aug 12, 2025)
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (☆341, updated Feb 23, 2025)
- Pretraining and inference code for a large-scale depth-recurrent language model (☆865, updated Dec 29, 2025)
- MEXMA: Token-level objectives improve sentence representations (☆43, updated Jan 6, 2025)
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling (☆476, updated May 17, 2025)
- Scalable RL solution for advanced reasoning of language models (☆1,821, updated Mar 18, 2025)
- An Open Large Reasoning Model for Real-World Solutions (☆1,539, updated Feb 13, 2026)
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM…" (☆300, updated Jul 28, 2025)
- VPTQ, a flexible and extreme low-bit quantization algorithm (☆674, updated Apr 25, 2025)
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) (☆344, updated Dec 16, 2025)
- Official repository for "Scaling Retrieval-Based Langauge Models with a Trillion-Token Datastore".☆224Dec 16, 2025Updated 3 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆361Feb 5, 2026Updated last month
- Stick-breaking attention☆62Jul 1, 2025Updated 8 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling☆42Dec 29, 2025Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"☆977Feb 5, 2026Updated last month
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training☆560Jan 13, 2025Updated last year
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality☆331Jan 5, 2026Updated 2 months ago
- Transformers components but in Triton☆34May 9, 2025Updated 10 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models"☆449Oct 16, 2024Updated last year
- HGRN2: Gated Linear RNNs with State Expansion☆56Aug 20, 2024Updated last year
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.☆4,757Jul 18, 2025Updated 8 months ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425)☆450Jan 26, 2026Updated last month
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at…☆103Jun 14, 2024Updated last year
- The original Shared Recurrent Memory Transformer implementation☆34Jul 11, 2025Updated 8 months ago
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" (☆375, updated Apr 23, 2024)
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep…☆60Oct 31, 2024Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers.☆352Oct 22, 2024Updated last year