booydar / LM-RMT
Recurrent Memory Transformer
☆150 · Updated 2 years ago
Alternatives and similar repositories for LM-RMT
Users interested in LM-RMT are comparing it to the libraries listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆136 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated 11 months ago
- Implementation of Recurrent Memory Transformer (NeurIPS 2022 paper) in PyTorch ☆413 · Updated 7 months ago
- ☆66 · Updated 11 months ago
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆229 · Updated 11 months ago
- Scaling Data-Constrained Language Models ☆339 · Updated last month
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) ☆132 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆258 · Updated last year
- ☆159 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆163 · Updated 3 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆220 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated last year
- ☆96 · Updated 2 years ago
- ☆135 · Updated 9 months ago
- Efficient Transformers with Dynamic Token Pooling ☆63 · Updated 2 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆163 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆206 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆203 · Updated 11 months ago
- Sequence modeling with Mega ☆297 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks ☆221 · Updated last month
- Some preliminary explorations of Mamba's context scaling ☆216 · Updated last year
- ☆269 · Updated last year
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s…) ☆139 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- RLHF implementation details of OAI's 2019 codebase ☆189 · Updated last year
- A (somewhat) minimal library for finetuning language models with PPO on human feedback ☆86 · Updated 2 years ago