[EMNLP 2023] Adapting Language Models to Compress Long Contexts
☆331 · Sep 9, 2024 · Updated last year
Alternatives and similar repositories for AutoCompressors
Users interested in AutoCompressors are comparing it to the repositories listed below.
- The repo for In-context Autoencoder ☆164 · May 11, 2024 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆306 · Feb 14, 2025 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆169 · Jun 13, 2024 · Updated last year
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated last year
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆511 · Oct 9, 2024 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Jun 15, 2024 · Updated last year
- Recurrent Memory Transformer ☆155 · Aug 14, 2023 · Updated 2 years ago
- Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40% memory and GPU time. ☆409 · Feb 12, 2024 · Updated 2 years ago
- Long Context Extension and Generalization in LLMs ☆63 · Sep 21, 2024 · Updated last year
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,101 · Jan 15, 2025 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆426 · Dec 20, 2023 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆446 · Oct 16, 2024 · Updated last year
- Official implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Aug 25, 2023 · Updated 2 years ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆378 · Sep 25, 2024 · Updated last year
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Nov 6, 2023 · Updated 2 years ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Jul 24, 2023 · Updated 2 years ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆642 · Mar 4, 2024 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,064 · Mar 7, 2024 · Updated last year
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 ☆1,920 · Updated this week
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆37 · Jun 10, 2024 · Updated last year
- [ICLR 2023] Codebase for the Copy-Generator model, including an implementation of kNN-LM ☆190 · Jan 27, 2025 · Updated last year
- KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long-Context Capable Approaches (EMNLP Findings 2024) ☆88 · Feb 27, 2025 · Updated last year
- ☆301 · Jul 10, 2025 · Updated 7 months ago
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… ☆286 · Oct 20, 2022 · Updated 3 years ago
- ☆25 · Oct 20, 2022 · Updated 3 years ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-Augmented Generation with One Token ☆172 · Jul 4, 2024 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆179 · Jul 12, 2024 · Updated last year
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆195 · Oct 8, 2024 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Nov 11, 2024 · Updated last year
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation ☆145 · Jan 6, 2026 · Updated last month
- Code and checkpoints for "Generate rather than Retrieve: Large Language Models are Strong Context Generators" (ICLR 2023) ☆291 · Jan 29, 2023 · Updated 3 years ago
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆541 · Jan 17, 2025 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆247 · Sep 12, 2025 · Updated 5 months ago
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- ☆187 · Jul 2, 2025 · Updated 8 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆127 · Mar 30, 2024 · Updated last year
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,190 · Aug 17, 2024 · Updated last year