Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply.
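The lookup idea can be sketched in a few lines: score a query against a bank of trainable keys, activate only the top-k matches, and return a softmax-weighted sum of the corresponding values. This is an illustrative NumPy sketch of that mechanism, not the repository's actual implementation; all names and sizes (`memory_layer`, `num_keys`, `k`, etc.) are hypothetical.

```python
import numpy as np

def memory_layer(query, keys, values, k=4):
    """Sparse key-value memory lookup (illustrative sketch).
    Only the top-k keys are activated, so per-query compute stays
    small even when the memory bank holds many parameters."""
    scores = keys @ query                       # (num_keys,) dot-product scores
    topk = np.argpartition(scores, -k)[-k:]     # indices of the k best-matching keys
    weights = np.exp(scores[topk] - scores[topk].max())
    weights /= weights.sum()                    # softmax over the k winners only
    return weights @ values[topk]               # (value_dim,) weighted value sum

rng = np.random.default_rng(0)
num_keys, d_key, d_val = 1024, 16, 32           # hypothetical memory sizes
keys = rng.standard_normal((num_keys, d_key))   # trainable key bank
values = rng.standard_normal((num_keys, d_val)) # trainable value bank
query = rng.standard_normal(d_key)

out = memory_layer(query, keys, values, k=4)
print(out.shape)  # (32,)
```

Because only k of the num_keys rows participate in each lookup, the FLOP cost beyond scoring is independent of the memory size, which is what lets memory capacity scale without scaling dense compute.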
☆375 · Dec 12, 2024 · Updated last year
Alternatives and similar repositories for memory
Users that are interested in memory are comparing it to the libraries listed below.
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆150 · Feb 25, 2026 · Updated 2 months ago
- Large Concept Models: Language modeling in a sentence representation space ☆2,349 · Jan 29, 2025 · Updated last year
- Official repo of the LM2 paper ☆47 · Feb 13, 2025 · Updated last year
- [ICLR 2025 Spotlight 🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆588 · Feb 11, 2025 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆255 · Jan 31, 2025 · Updated last year
- ☆135 · Jun 6, 2025 · Updated 10 months ago
- ☆139 · May 29, 2025 · Updated 11 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆250 · Sep 12, 2025 · Updated 7 months ago
- Layer-Condensed KV cache with 10× larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆156 · Apr 7, 2025 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated 3 months ago
- ☆130 · Feb 4, 2026 · Updated 2 months ago
- Code for the BLT research paper ☆2,035 · Nov 3, 2025 · Updated 5 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆344 · Feb 23, 2025 · Updated last year
- Pretraining and inference code for a large-scale depth-recurrent language model ☆879 · Dec 29, 2025 · Updated 4 months ago
- MEXMA: Token-level objectives improve sentence representations ☆43 · Jan 6, 2025 · Updated last year
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,593 · Apr 8, 2026 · Updated 3 weeks ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆479 · May 17, 2025 · Updated 11 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,852 · Mar 18, 2025 · Updated last year
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ☆1,952 · Feb 9, 2026 · Updated 2 months ago
- VPTQ: a flexible and extreme low-bit quantization algorithm ☆678 · Apr 25, 2025 · Updated last year
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM… ☆312 · Jul 28, 2025 · Updated 9 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,540 · Feb 13, 2026 · Updated 2 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) ☆351 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆367 · Apr 13, 2026 · Updated 2 weeks ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆225 · Dec 16, 2025 · Updated 4 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Dec 29, 2025 · Updated 4 months ago
- Stick-breaking attention ☆63 · Jul 1, 2025 · Updated 10 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆993 · Feb 5, 2026 · Updated 2 months ago
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality ☆342 · Jan 5, 2026 · Updated 3 months ago
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆570 · Jan 13, 2025 · Updated last year
- Transformer components, but in Triton ☆34 · May 9, 2025 · Updated 11 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Oct 16, 2024 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Aug 20, 2024 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆103 · Jun 14, 2024 · Updated last year
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆453 · Jan 26, 2026 · Updated 3 months ago
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs ☆4,762 · Jul 18, 2025 · Updated 9 months ago
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆376 · Apr 23, 2024 · Updated 2 years ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆60 · Oct 31, 2024 · Updated last year
- The original Shared Recurrent Memory Transformer implementation ☆35 · Jul 11, 2025 · Updated 9 months ago