facebookresearch / memory
Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply.
☆333 · Updated 5 months ago
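To make that mechanism concrete, below is a minimal PyTorch sketch of a sparsely activated key-value memory layer. This is an illustration only, not the repo's implementation: the class name, slot count, and the plain top-k lookup over all keys are assumptions chosen for brevity (a paper-scale version would replace the exhaustive key search with something more scalable, such as product-key lookup).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueMemoryLayer(nn.Module):
    """Hypothetical, simplified sketch of a sparsely activated
    key-value memory layer. Each token retrieves only top_k of the
    num_slots value vectors, so parameters grow with num_slots while
    per-token FLOPs stay roughly constant."""

    def __init__(self, d_model: int, num_slots: int = 4096, top_k: int = 8):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        # Trainable keys and values; the values are the "extra parameters".
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) / d_model**0.5)
        self.values = nn.Embedding(num_slots, d_model)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.query_proj(x)                         # (B, S, D)
        scores = q @ self.keys.t()                     # (B, S, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)        # (B, S, k)
        selected = self.values(top_idx)                # (B, S, k, D)
        # Weighted sum over the k retrieved slots only.
        return (weights.unsqueeze(-1) * selected).sum(dim=-2)

if __name__ == "__main__":
    layer = KeyValueMemoryLayer(d_model=64)
    out = layer(torch.randn(2, 10, 64))
    print(out.shape)  # torch.Size([2, 10, 64])
```

Because only `top_k` value rows participate per token, `num_slots` can be scaled up to add capacity without a matching increase in compute, which is the trade-off the description above refers to.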
Alternatives and similar repositories for memory
Users who are interested in memory are comparing it to the libraries listed below:
- prime-rl is a codebase for decentralized async RL training at scale ☆311 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆367 · Updated this week
- ☆178 · Updated 5 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆391 · Updated 6 months ago
- Tina: Tiny Reasoning Models via LoRA ☆245 · Updated this week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) ☆248 · Updated this week
- PyTorch implementation of models from the Zamba2 series ☆181 · Updated 4 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆237 · Updated 4 months ago
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆343 · Updated last week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆301 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆199 · Updated 10 months ago
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆345 · Updated 2 weeks ago
- Scalable toolkit for efficient model reinforcement ☆361 · Updated this week
- ☆188 · Updated 3 months ago
- Pretraining code for a large-scale depth-recurrent language model ☆770 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding ☆171 · Updated 4 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆170 · Updated 5 months ago
- Build your own visual reasoning model ☆370 · Updated this week
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆221 · Updated 3 weeks ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆166 · Updated last week
- Exploring Applications of GRPO ☆229 · Updated 2 weeks ago
- A project to improve the skills of large language models ☆413 · Updated this week
- Reproducible, flexible LLM evaluations ☆203 · Updated 3 weeks ago
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆417 · Updated 2 weeks ago
- Normalized Transformer (nGPT) ☆181 · Updated 6 months ago
- ☆450 · Updated this week
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆155 · Updated last month
- An extension of the nanoGPT repository for training small MoE models ☆147 · Updated 2 months ago
- PyTorch building blocks for the OLMo ecosystem ☆222 · Updated this week
- ☆554 · Updated last month