booydar / LM-RMT
Recurrent Memory Transformer
☆149 · Updated last year
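The repository above implements the Recurrent Memory Transformer: a long sequence is processed segment by segment, a fixed number of memory tokens is prepended to each segment, and the updated memory tokens are carried forward to the next segment. A minimal, framework-free sketch of that data flow is below; `layer` and `rmt_forward` are hypothetical stand-ins for illustration, not the repo's API, and `layer` is a trivial stub rather than a real transformer block.

```python
# Toy sketch of the RMT segment-level recurrence (assumptions: scalar
# "tokens", a stub mixing layer instead of real attention).

def layer(tokens):
    # Stand-in for a transformer block: mix each position with the
    # running mean so information can flow between positions.
    mean = sum(tokens) / len(tokens)
    return [0.5 * t + 0.5 * mean for t in tokens]

def rmt_forward(long_sequence, num_mem=2, segment_len=4):
    memory = [0.0] * num_mem                  # learned init in the real model
    outputs = []
    for start in range(0, len(long_sequence), segment_len):
        segment = long_sequence[start:start + segment_len]
        hidden = layer(memory + segment)      # memory tokens prepended
        memory = hidden[:num_mem]             # updated memory carried forward
        outputs.extend(hidden[num_mem:])      # per-segment outputs
    return outputs, memory

outs, mem = rmt_forward([float(i) for i in range(10)])
```

The key design point this illustrates: the only channel between segments is the small set of memory tokens, so the per-step cost stays bounded by the segment length while context can, in principle, persist across arbitrarily many segments.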
Related projects
Alternatives and complementary repositories for LM-RMT
- Implementation of Recurrent Memory Transformer (NeurIPS 2022 paper) in PyTorch ☆394 · Updated this week
- DSIR: large-scale data selection framework for language model training ☆230 · Updated 7 months ago
- Scaling Data-Constrained Language Models ☆321 · Updated last month
- Understand and test language model architectures on synthetic tasks ☆162 · Updated 6 months ago
- ☆247 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆277 · Updated 2 months ago
- Self-Alignment with Principle-Following Reward Models ☆148 · Updated 8 months ago
- Chain-of-Hindsight, a scalable RLHF method ☆220 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆169 · Updated 2 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆384 · Updated 6 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆307 · Updated 7 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆127 · Updated 6 months ago
- ☆158 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) ☆126 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆199 · Updated 6 months ago
- RLHF implementation details of OAI's 2019 codebase ☆152 · Updated 10 months ago
- Some preliminary explorations of Mamba's context scaling ☆191 · Updated 9 months ago
- Implementation of the conditionally routed attention from the CoLT5 architecture, in PyTorch ☆225 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated 3 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆205 · Updated 3 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" (COLM 2024) ☆127 · Updated 2 months ago
- ☆94 · Updated last year
- ☆132 · Updated last year
- A pipeline to improve skills of large language models ☆191 · Updated this week
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆213 · Updated last year
- BABILong: a benchmark for LLM evaluation using the needle-in-a-haystack approach ☆152 · Updated last week
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆160 · Updated 3 months ago
- Code for the ACL 2023 paper "Pre-Training to Learn in Context" ☆106 · Updated 3 months ago