facebookresearch / memory
Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply.
☆342 · Updated 7 months ago
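As a rough illustration of the mechanism described above, here is a minimal PyTorch sketch of a sparsely activated key-value memory layer. This is not the facebookresearch/memory API: the names (`MemoryLayer`, `num_slots`, `topk`) are made up, and for clarity it scores every key densely, whereas the actual work uses a product-key lookup so retrieval cost stays sub-linear in the number of slots.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Sparse key-value memory: large parameter count, small per-token compute.

    Hypothetical sketch, not the facebookresearch/memory implementation.
    """
    def __init__(self, dim: int, num_slots: int, topk: int = 4):
        super().__init__()
        # Trainable keys and values; parameters grow with num_slots.
        self.keys = nn.Parameter(torch.randn(num_slots, dim) / dim ** 0.5)
        self.values = nn.Parameter(torch.randn(num_slots, dim) / dim ** 0.5)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        scores = x @ self.keys.t()                 # (batch, seq, num_slots)
        w, idx = scores.topk(self.topk, dim=-1)    # keep only the top-k slots
        w = F.softmax(w, dim=-1)
        v = self.values[idx]                       # gather: (batch, seq, topk, dim)
        return (w.unsqueeze(-1) * v).sum(dim=-2)   # sparse weighted readout

mem = MemoryLayer(dim=512, num_slots=65536)
out = mem(torch.randn(2, 16, 512))                # (2, 16, 512)
```

Only `topk` value rows are read per token, so the value-readout cost depends on `topk` rather than `num_slots`, which is what lets capacity grow without a matching growth in per-token FLOPs.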
Alternatives and similar repositories for memory
Users interested in memory are comparing it to the libraries listed below.
- Tina: Tiny Reasoning Models via LoRA ☆272 · Updated 2 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens. ☆237 · Updated 11 months ago
- Decentralized RL Training at Scale ☆400 · Updated this week
- ☆190 · Updated 7 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆323 · Updated 3 months ago
- PyTorch building blocks for the OLMo ecosystem ☆269 · Updated this week
- [ICML 2024] CLLMs: Consistency Large Language Models ☆397 · Updated 8 months ago
- Code for the paper "Learning to Reason without External Rewards" ☆337 · Updated 3 weeks ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆263 · Updated last week
- PyTorch implementation of models from the Zamba2 series. ☆184 · Updated 6 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆244 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated last year
- A project to improve the skills of large language models ☆501 · Updated this week
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling ☆417 · Updated 2 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆226 · Updated 3 months ago
- ☆206 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 6 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆290 · Updated last year
- Exploring Applications of GRPO ☆245 · Updated 3 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆418 · Updated last week
- An extension of the nanoGPT repository for training small MoE models. ☆163 · Updated 4 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆219 · Updated last month
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆321 · Updated 8 months ago
- Normalized Transformer (nGPT) ☆185 · Updated 8 months ago
- Official PyTorch implementation of "Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache" ☆113 · Updated 2 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆318 · Updated 9 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆808 · Updated 2 weeks ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆277 · Updated last week
- Scalable toolkit for efficient model reinforcement ☆558 · Updated this week
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 3 months ago