facebookresearch / memory
Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply.
☆353 · Updated 11 months ago
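As a rough sketch of the idea (not the repository's implementation: the actual memory layers use a product-key decomposition so scoring scales sub-linearly with memory size, and every name and hyperparameter below is invented for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseKVMemory(nn.Module):
    """Illustrative trainable key-value memory with sparse top-k lookup.

    Simplified sketch: real memory layers use product keys so they never
    score all num_keys entries; here we score them all for clarity.
    All names and hyperparameters are hypothetical.
    """

    def __init__(self, dim: int, num_keys: int = 16384, top_k: int = 8):
        super().__init__()
        # Trainable keys and values: extra capacity that lives outside
        # the dense feed-forward blocks.
        self.keys = nn.Parameter(torch.randn(num_keys, dim) / dim**0.5)
        self.values = nn.Parameter(torch.randn(num_keys, dim) / dim**0.5)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score each token against every memory key.
        scores = x @ self.keys.t()                    # (batch, seq, num_keys)
        # Keep only the top-k keys per token: the extra parameters are
        # sparsely activated, so the value aggregation depends on top_k,
        # not on the total memory size.
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)       # (batch, seq, top_k)
        selected = self.values[top_idx]               # (batch, seq, top_k, dim)
        return (weights.unsqueeze(-1) * selected).sum(dim=-2)
```

Dropping a layer like this alongside a feed-forward block adds roughly 2 × num_keys × dim parameters, while each token only ever aggregates top_k value rows.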
Alternatives and similar repositories for memory
Users interested in memory are comparing it to the libraries listed below.
- Tina: Tiny Reasoning Models via LoRA ☆304 · Updated last month
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆264 · Updated 2 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆558 · Updated last week
- PyTorch building blocks for the OLMo ecosystem ☆317 · Updated this week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆296 · Updated 2 weeks ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆405 · Updated 11 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆344 · Updated 6 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆248 · Updated 9 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆223 · Updated last week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆450 · Updated 5 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆210 · Updated 8 months ago
- ☆201 · Updated 11 months ago
- ☆225 · Updated 3 weeks ago
- A project to improve skills of large language models ☆608 · Updated this week
- Normalized Transformer (nGPT) ☆192 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 9 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆292 · Updated last year
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆551 · Updated last month
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆232 · Updated 3 months ago
- Exploring Applications of GRPO ☆248 · Updated 2 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆327 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆231 · Updated 3 weeks ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 9 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆336 · Updated 11 months ago
- Minimal hackable GRPO implementation ☆300 · Updated 9 months ago
- Code for the paper: "Learning to Reason without External Rewards" ☆370 · Updated 4 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆843 · Updated 3 weeks ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆298 · Updated last week
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆179 · Updated 4 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆302 · Updated last week