Compress your input to ChatGPT or other LLMs, letting them process 2x more content while saving 40% of memory and GPU time.
☆411 · Feb 12, 2024 · Updated 2 years ago
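The description above names the core idea behind Selective_Context: drop the least informative parts of a prompt so the model's context budget goes further. The snippet below is a library-agnostic sketch of that idea, not the repository's actual API. It uses document-level word frequency as a crude stand-in for the language-model surprisal the real method computes; the `compress` function and `keep_ratio` parameter are illustrative names, assumed here for the example.

```python
import math
from collections import Counter

def compress(text: str, keep_ratio: float = 0.5) -> str:
    """Sketch of self-information-based context compression:
    score each word by its surprisal within the document (-log p)
    and keep only the most informative fraction, in original order.
    """
    words = text.split()
    counts = Counter(words)
    total = len(words)
    # Surprisal proxy: rarer words in this document carry more information.
    scores = [-math.log(counts[w] / total) for w in words]
    k = max(1, int(len(words) * keep_ratio))
    # Pick the k highest-scoring positions, then restore document order.
    top = sorted(range(len(words)), key=lambda i: scores[i], reverse=True)[:k]
    return " ".join(words[i] for i in sorted(top))

print(compress("the cat the dog the fox", keep_ratio=0.5))  # → cat dog fox
```

The real method scores spans with a causal LM's token probabilities rather than raw frequency, but the filtering step (rank by information content, keep the top fraction, preserve order) is the same shape.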
Alternatives and similar repositories for Selective_Context
Users interested in Selective_Context are comparing it to the libraries listed below.
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆333 · Sep 9, 2024 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,937 · Oct 28, 2025 · Updated 4 months ago
- ☆21 · Jan 16, 2025 · Updated last year
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- ☆14 · Jul 5, 2024 · Updated last year
- The repo for In-context Autoencoder ☆166 · May 11, 2024 · Updated last year
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Nov 6, 2023 · Updated 2 years ago
- ☆54 · Updated this week
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Jul 24, 2023 · Updated 2 years ago
- Code for "Retaining Key Information under High Compression Rates: Query-Guided Compressor for LLMs" (ACL 2024) ☆18 · Jun 12, 2024 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Feb 14, 2025 · Updated last year
- ☆19 · Mar 10, 2025 · Updated last year
- This repository contains the official code for the paper "Prompt Injection: Parameterization of Fixed Inputs" ☆32 · Sep 13, 2024 · Updated last year
- ☆921 · May 22, 2024 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆65 · Feb 21, 2025 · Updated last year
- ☆21 · Aug 9, 2024 · Updated last year
- Code for the paper "Self-Detoxifying Language Models via Toxification Reversal" (EMNLP 2023) ☆18 · Oct 17, 2023 · Updated 2 years ago
- ☆16 · Mar 30, 2024 · Updated last year
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated last year
- ☆16 · Sep 4, 2025 · Updated 6 months ago
- Official code repository for Sketch-of-Thought (SoT) ☆136 · May 8, 2025 · Updated 10 months ago
- Tools for merging pretrained large language models. ☆6,867 · Mar 15, 2026 · Updated last week
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,697 · Aug 14, 2024 · Updated last year
- The official repository for MGFiD (NAACL 2024 Findings) ☆15 · Jul 27, 2024 · Updated last year
- uvx is now uvenv ☆15 · Dec 4, 2024 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆41 · Aug 4, 2023 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆311 · Feb 14, 2025 · Updated last year
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆50 · Dec 7, 2024 · Updated last year
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models". ☆18 · Apr 25, 2025 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,322 · Mar 6, 2025 · Updated last year
- A toolkit to induce interpretable workflows from raw computer-use activities. ☆42 · Nov 13, 2025 · Updated 4 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,111 · Oct 7, 2024 · Updated last year
- Layer-Condensed KV cache with 10x larger batch size, fewer params, and less computation. Dramatic speed-up with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago
- Repository for "PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers" (NAACL 2024) ☆153 · Jun 16, 2024 · Updated last year
- Towards Efficient Shapley Value Estimation via Cross-contribution Maximization ☆14 · Jul 8, 2022 · Updated 3 years ago
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation ☆145 · Jan 6, 2026 · Updated 2 months ago
- The official data and code for the EMNLP 2023 main conference paper: CRT-QA: A Dataset of Complex Reasoning Question Answering over Tabular D… ☆13 · May 19, 2025 · Updated 10 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆173 · Jul 4, 2024 · Updated last year
- ☆13 · Jun 28, 2021 · Updated 4 years ago