Compress your input to ChatGPT and other LLMs so they can process 2x more content while saving 40% of memory and GPU time.
☆413 · Feb 12, 2024 · Updated 2 years ago
Alternatives and similar repositories for Selective_Context
Users interested in Selective_Context are comparing it to the libraries listed below.
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆335 · Sep 9, 2024 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆6,003 · Updated this week
- ☆22 · Jan 16, 2025 · Updated last year
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- ☆14 · Jul 5, 2024 · Updated last year
- The repo for In-context Autoencoder ☆168 · May 11, 2024 · Updated last year
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Nov 6, 2023 · Updated 2 years ago
- ☆54 · Mar 31, 2026 · Updated 2 weeks ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Jul 24, 2023 · Updated 2 years ago
- Code for "Retaining Key Information under High Compression Rates: Query-Guided Compressor for LLMs" (ACL 2024) ☆19 · Jun 12, 2024 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Feb 14, 2025 · Updated last year
- ☆19 · Mar 10, 2025 · Updated last year
- End-to-End Neural Event Coreference Resolution ☆11 · Jun 18, 2023 · Updated 2 years ago
- ☆920 · May 22, 2024 · Updated last year
- This repository contains the official code for the paper "Prompt Injection: Parameterization of Fixed Inputs" ☆32 · Sep 13, 2024 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆66 · Feb 21, 2025 · Updated last year
- ☆21 · Aug 9, 2024 · Updated last year
- Code for the paper "Self-Detoxifying Language Models via Toxification Reversal" (EMNLP 2023) ☆18 · Oct 17, 2023 · Updated 2 years ago
- [ACL 2025 Main] Repository for the paper "500xCompressor: Generalized Prompt Compression for Large Language Models" ☆60 · Mar 9, 2026 · Updated last month
- ☆16 · Mar 30, 2024 · Updated 2 years ago
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated last year
- ☆16 · Sep 4, 2025 · Updated 7 months ago
- Official code repository for Sketch-of-Thought (SoT) ☆137 · May 8, 2025 · Updated 11 months ago
- Tools for merging pretrained large language models. ☆6,973 · Mar 15, 2026 · Updated last month
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,695 · Aug 14, 2024 · Updated last year
- The official repository for MGFiD (NAACL 2024 Findings) ☆15 · Jul 27, 2024 · Updated last year
- uvx is now uvenv ☆16 · Dec 4, 2024 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆41 · Aug 4, 2023 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆315 · Feb 14, 2025 · Updated last year
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆51 · Dec 7, 2024 · Updated last year
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models". ☆18 · Apr 25, 2025 · Updated 11 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,327 · Mar 6, 2025 · Updated last year
- A toolkit to induce interpretable workflows from raw computer-use activities. ☆42 · Nov 13, 2025 · Updated 5 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,115 · Oct 7, 2024 · Updated last year
- Layer-Condensed KV cache with 10x larger batch size, fewer parameters, and less computation. Dramatic speedup with better task performance… ☆157 · Apr 7, 2025 · Updated last year
- Repository for "PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers" (NAACL 2024) ☆154 · Jun 16, 2024 · Updated last year
- Towards Efficient Shapley Value Estimation via Cross-contribution Maximization ☆14 · Jul 8, 2022 · Updated 3 years ago
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation. ☆147 · Jan 6, 2026 · Updated 3 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆174 · Jul 4, 2024 · Updated last year