facebookresearch / memory
Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply.
☆311 · Updated 3 months ago
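The idea can be sketched in a few lines: a query projected from the hidden state scores a large table of trainable keys, only the top-k matching value slots are read, and their weighted sum is returned. The snippet below is a minimal, hypothetical PyTorch sketch of such a sparse key-value memory layer; the class, parameter names, and sizes are illustrative assumptions, not the actual API of facebookresearch/memory, and the exhaustive key scoring shown here is the naive variant (efficient implementations typically factorize the keys so they never score every slot).

```python
# Minimal sketch of a sparsely activated memory layer (illustrative only;
# names, shapes, and defaults are assumptions, not this repo's actual API).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMemoryLayer(nn.Module):
    def __init__(self, dim: int, num_keys: int = 65536, topk: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)                      # hidden state -> query
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * dim ** -0.5)
        self.values = nn.Embedding(num_keys, dim)                  # trainable value table
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        q = self.query_proj(x)                                     # (batch, seq, dim)
        scores = q @ self.keys.t()                                 # (batch, seq, num_keys)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)       # select k best slots
        weights = F.softmax(top_scores, dim=-1)                    # softmax over selected slots only
        selected = self.values(top_idx)                            # (batch, seq, topk, dim)
        # Only `topk` value rows contribute per token, so the num_keys * dim
        # extra parameters add memory capacity with little added compute.
        return (weights.unsqueeze(-1) * selected).sum(dim=-2)
```

Used as a drop-in replacement for (or alongside) a feed-forward block, such a layer grows parameter count with `num_keys` while the per-token work stays roughly proportional to `topk`.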
Alternatives and similar repositories for memory:
Users interested in memory are comparing it to the libraries listed below:
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"β226Updated 2 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆300 · Updated last week
- ☆173 · Updated 3 months ago
- PyTorch building blocks for the OLMo ecosystem ☆177 · Updated this week
- [ICML 2024] CLLMs: Consistency Large Language Models ☆388 · Updated 4 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆195 · Updated 8 months ago
- PyTorch implementation of models from the Zamba2 series. ☆178 · Updated 2 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆279 · Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆168 · Updated 2 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" ☆287 · Updated 10 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆210 · Updated last week
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ☆529 · Updated last week
- Pretraining code for a large-scale depth-recurrent language model ☆709 · Updated 2 weeks ago
- Efficient LLM Inference over Long Sequences ☆365 · Updated last month
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆149 · Updated last week
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆161 · Updated 3 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆212 · Updated 7 months ago
- Normalized Transformer (nGPT) ☆164 · Updated 4 months ago
- ☆493 · Updated last week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆399 · Updated 3 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆706 · Updated 6 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆140 · Updated this week
- Build your own visual reasoning model ☆320 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆442 · Updated last month
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆209 · Updated 3 weeks ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆477 · Updated 2 weeks ago
- A project to improve skills of large language models ☆260 · Updated this week
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆405 · Updated 11 months ago
- ☆158 · Updated last month
- ☆262 · Updated 2 weeks ago