booydar / LM-RMT
Recurrent Memory Transformer
☆155 · Updated 2 years ago
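The repository implements the Recurrent Memory Transformer (RMT): a long input is split into segments, a small set of learned memory tokens is processed together with each segment, and the memory tokens' output states are carried over as the memory input for the next segment. Below is a minimal PyTorch sketch of that segment-level recurrence; `RMTSketch` and all names and hyperparameters are illustrative, not the repository's actual API, and details such as memory placement and causal masking are simplified.

```python
import torch
import torch.nn as nn

class RMTSketch(nn.Module):
    """Toy segment-recurrent encoder: memory token states produced on one
    segment are fed back in as the memory input of the next segment."""

    def __init__(self, d_model=256, n_mem=4, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Learned initial memory tokens, shared across the batch.
        self.mem_init = nn.Parameter(torch.randn(1, n_mem, d_model))
        self.n_mem = n_mem

    def forward(self, segments):
        # segments: list of (batch, seg_len, d_model) tensors
        mem = self.mem_init.expand(segments[0].size(0), -1, -1)
        outputs = []
        for seg in segments:
            # Process [memory; segment] jointly through the transformer.
            h = self.encoder(torch.cat([mem, seg], dim=1))
            # Updated memory states recur into the next segment.
            mem, out = h[:, :self.n_mem], h[:, self.n_mem:]
            outputs.append(out)
        return torch.cat(outputs, dim=1), mem

# Usage: a 512-token sequence processed as four 128-token segments.
model = RMTSketch()
segments = list(torch.randn(2, 512, 256).split(128, dim=1))
out, final_mem = model(segments)
```

This recurrence is what lets a transformer with a fixed attention window pass information across arbitrarily many segments, at a constant memory cost per segment.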
Alternatives and similar repositories for LM-RMT
Users interested in LM-RMT are comparing it to the libraries listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- ☆67 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆422 · Updated last year
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆136 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆342 · Updated 7 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆231 · Updated last year
- DSIR large-scale data selection framework for language model training ☆269 · Updated last year
- Efficient Transformers with Dynamic Token Pooling ☆67 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆204 · Updated last year
- ☆98 · Updated 2 years ago
- ☆158 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. ☆252 · Updated 3 weeks ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Simple next-token-prediction for RLHF ☆228 · Updated 2 years ago
- Sequence modeling with Mega. ☆303 · Updated 3 years ago
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated last year
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆90 · Updated 3 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Updated 2 years ago
- ☆259 · Updated 8 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- ☆167 · Updated 2 years ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆224 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆151 · Updated 2 years ago
- ☆273 · Updated 2 years ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆247 · Updated 8 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago