Compress your input to ChatGPT or other LLMs so they can process 2× more content while saving 40% of memory and GPU time.
☆416 · Feb 12, 2024 · Updated 2 years ago
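The idea behind this kind of prompt compression is to drop the parts of the input that carry little self-information, keeping only the tokens an LM would find surprising. A minimal toy sketch of that idea, using a unigram frequency model fit on the text itself as a stand-in for an LM's self-information estimate (the function name and API here are illustrative, not the library's actual interface):

```python
import math
from collections import Counter

def compress(text: str, keep_ratio: float = 0.5) -> str:
    """Drop the most predictable (lowest self-information) words.

    Self-information of each word occurrence is approximated as
    -log p(word) under a unigram model fit on the text itself -- a toy
    stand-in for the LM-based estimate a real compressor would use.
    """
    words = text.split()
    freq = Counter(words)
    total = len(words)
    # Surprisal of each occurrence; keep the index so order survives.
    scored = [(-math.log(freq[w] / total), i, w) for i, w in enumerate(words)]
    # Retain the top keep_ratio most informative occurrences, in original order.
    n_keep = max(1, int(len(words) * keep_ratio))
    kept = sorted(sorted(scored, reverse=True)[:n_keep], key=lambda t: t[1])
    return " ".join(w for _, _, w in kept)

# Frequent filler words ("the", repeated nouns) are pruned first.
print(compress("the cat sat on the mat and the cat slept on the mat", keep_ratio=0.5))
```

A real implementation scores spans with an actual language model's token log-probabilities rather than unigram counts, but the filtering step is the same: rank by surprisal, keep the top fraction, preserve order.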
Alternatives and similar repositories for Selective_Context
Users interested in Selective_Context are comparing it to the libraries listed below.
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆335 · Sep 9, 2024 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆6,085 · Apr 8, 2026 · Updated 3 weeks ago
- ☆23 · Jan 16, 2025 · Updated last year
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- ☆14 · Jul 5, 2024 · Updated last year
- The repo for In-context Autoencoder ☆169 · May 11, 2024 · Updated last year
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Nov 6, 2023 · Updated 2 years ago
- ☆55 · Apr 18, 2026 · Updated 2 weeks ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Jul 24, 2023 · Updated 2 years ago
- Code for "Retaining Key Information under High Compression Rates: Query-Guided Compressor for LLMs" (ACL 2024) ☆19 · Jun 12, 2024 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Feb 14, 2025 · Updated last year
- ☆19 · Mar 10, 2025 · Updated last year
- ☆922 · May 22, 2024 · Updated last year
- This repository contains the official code for the paper "Prompt Injection: Parameterization of Fixed Inputs" ☆32 · Sep 13, 2024 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆66 · Feb 21, 2025 · Updated last year
- ☆21 · Aug 9, 2024 · Updated last year
- [NAACL 2025 Findings] Code for "Perception Compressor: A Training-Free Prompt Compression Framework in Long Context Scenarios" ☆27 · Mar 5, 2025 · Updated last year
- Code for the paper "Self-Detoxifying Language Models via Toxification Reversal" (EMNLP 2023) ☆18 · Oct 17, 2023 · Updated 2 years ago
- [ACL 2025 Main] Repository for the paper "500xCompressor: Generalized Prompt Compression for Large Language Models" ☆62 · Mar 9, 2026 · Updated last month
- ☆16 · Mar 30, 2024 · Updated 2 years ago
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated 2 years ago
- ☆16 · Sep 4, 2025 · Updated 8 months ago
- Official code repository for Sketch-of-Thought (SoT) ☆137 · May 8, 2025 · Updated 11 months ago
- Tools for merging pretrained large language models. ☆7,052 · Mar 15, 2026 · Updated last month
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,692 · Aug 14, 2024 · Updated last year
- The official repository for MGFiD (NAACL 2024 Findings) ☆15 · Jul 27, 2024 · Updated last year
- uvx is now uvenv ☆16 · Dec 4, 2024 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆41 · Aug 4, 2023 · Updated 2 years ago
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆51 · Dec 7, 2024 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆315 · Feb 14, 2025 · Updated last year
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models" ☆18 · Apr 25, 2025 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Mar 6, 2025 · Updated last year
- A toolkit to induce interpretable workflows from raw computer-use activities. ☆43 · Nov 13, 2025 · Updated 5 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,123 · Oct 7, 2024 · Updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer params and less computation. Dramatic speed-up with better task performance… ☆156 · Apr 7, 2025 · Updated last year
- Repository for "PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers" (NAACL 2024) ☆154 · Jun 16, 2024 · Updated last year
- Towards Efficient Shapley Value Estimation via Cross-contribution Maximization ☆14 · Jul 8, 2022 · Updated 3 years ago
- General technology for enabling AI capabilities with LLMs and MLLMs ☆4,368 · Updated this week
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation ☆148 · Jan 6, 2026 · Updated 3 months ago