princeton-nlp / AutoCompressors
[EMNLP 2023] Adapting Language Models to Compress Long Contexts
☆328 · Updated last year
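For orientation, the core idea of the paper is to process a long document segment by segment, appending learned summary tokens to each segment and carrying the resulting summary vectors forward as a soft prompt for the next segment. The sketch below is a minimal conceptual illustration of that loop, not the repo's actual API; `model`, `segments`, `summary_embeds`, and all shapes are placeholder assumptions.

```python
# Conceptual sketch of the AutoCompressors compression loop.
# NOTE: hypothetical interfaces, not the repo's real code.
import torch

def compress(model, segments, summary_embeds):
    """Recursively compress embedded segments into summary vectors.

    model          -- callable mapping input embeddings [1, T, d] -> hidden states [1, T, d]
    segments       -- list of already-embedded token segments, each [1, t_i, d]
    summary_embeds -- learned summary-token embeddings, [1, k, d] (placeholder)
    """
    carried = None  # summary vectors carried over from earlier segments
    for seg in segments:
        parts = ([carried] if carried is not None else []) + [seg, summary_embeds]
        hidden = model(torch.cat(parts, dim=1))  # run the base LM over the segment
        k = summary_embeds.shape[1]
        carried = hidden[:, -k:, :]  # the last k hidden states become the new summaries
    return carried  # compact soft-prompt representation of all segments
```

In the method itself, these summary vectors are trained end to end with the language modeling objective, so that conditioning on them stands in for conditioning on the full earlier context.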
Alternatives and similar repositories for AutoCompressors
Users interested in AutoCompressors are comparing it to the libraries listed below.
- DSIR: a large-scale data selection framework for language model training ☆269 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆373 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆484 · Updated last year
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆194 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆373 · Updated 2 years ago
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆535 · Updated last year
- ☆273 · Updated 2 years ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆391 · Updated last year
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆229 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆111 · Updated 11 months ago
- ☆294 · Updated 2 years ago
- ☆313 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆588 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆146 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- Data and code for "Program of Thoughts" [TMLR 2023] ☆306 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆350 · Updated 2 years ago
- A large-scale, fine-grained, diverse preference dataset (and models) ☆361 · Updated 2 years ago
- ☆322 · Updated last year
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆551 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated 2 months ago
- Official repository of "NEFTune: Noisy Embeddings Improve Instruction Finetuning" ☆409 · Updated last year
- Datasets for Instruction Tuning of Large Language Models ☆261 · Updated 2 years ago
- Generative Judge for Evaluating Alignment ☆250 · Updated 2 years ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆246 · Updated 5 months ago
- The repo for the In-context Autoencoder ☆164 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 · Updated 7 months ago