liyucheng09 / Selective_Context
Compress your input to ChatGPT or other LLMs so they can process 2× more content and save 40% of memory and GPU time.
☆411 · Feb 12, 2024 · Updated 2 years ago
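Selective Context ranks tokens and phrases by self-information computed with a small causal language model and drops the least informative ones before the prompt reaches the LLM. A minimal toy sketch of the idea, substituting a unigram frequency model (built from the input itself) for the language model — the function name, example text, and keep-ratio threshold are illustrative, not the repo's actual API:

```python
import math
from collections import Counter

def compress(text, keep_ratio=0.6):
    """Toy selective compression: score each word by unigram
    self-information (-log p) estimated from the text itself and
    keep only the most informative fraction, preserving order."""
    words = text.split()
    counts = Counter(w.lower() for w in words)
    total = len(words)
    # self-information of each token under the unigram model
    scores = [-math.log(counts[w.lower()] / total) for w in words]
    k = max(1, int(len(words) * keep_ratio))
    # indices of the k highest-information words, restored to original order
    keep = sorted(sorted(range(len(words)), key=lambda i: -scores[i])[:k])
    return " ".join(words[i] for i in keep)

text = ("the cat sat on the mat and the cat slept on the mat "
        "while the dog watched the cat")
print(compress(text, keep_ratio=0.5))
```

Frequent, predictable words like "the" carry little self-information and are pruned first; the real implementation uses LM token probabilities and can prune at phrase or sentence granularity instead of single words.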
Alternatives and similar repositories for Selective_Context
Users interested in Selective_Context are comparing it to the libraries listed below.
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆328 · Sep 9, 2024 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,823 · Oct 28, 2025 · Updated 3 months ago
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models" ☆18 · Apr 25, 2025 · Updated 9 months ago
- uvx is now uvenv ☆15 · Dec 4, 2024 · Updated last year
- ☆19 · Mar 10, 2025 · Updated 11 months ago
- The repo for In-context Autoencoder ☆164 · May 11, 2024 · Updated last year
- ☆921 · May 22, 2024 · Updated last year
- Repository for "PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers", NAACL 2024 ☆151 · Jun 16, 2024 · Updated last year
- [NeurIPS'24 LanGame workshop] On the Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆42 · Jul 7, 2025 · Updated 7 months ago
- Tools for merging pretrained large language models. ☆6,783 · Jan 26, 2026 · Updated 2 weeks ago
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Nov 6, 2023 · Updated 2 years ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,696 · Aug 14, 2024 · Updated last year
- Companion code to https://arxiv.org/abs/2409.03797v2 ☆19 · Sep 18, 2025 · Updated 4 months ago
- [ACL'25 (Findings)] Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents ☆26 · Oct 15, 2025 · Updated 3 months ago
- Code for the ACL 2021 paper "Structural Guidance for Transformer Language Models" ☆13 · Sep 17, 2025 · Updated 4 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆91 · Feb 14, 2025 · Updated last year
- ☆50 · Jan 28, 2025 · Updated last year
- 1-Click is all you need. ☆63 · Apr 29, 2024 · Updated last year
- ☆24 · Jan 30, 2025 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆28 · Jul 13, 2023 · Updated 2 years ago
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,038 · May 31, 2024 · Updated last year
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆50 · Dec 7, 2024 · Updated last year
- GRadient-INformed MoE ☆264 · Sep 25, 2024 · Updated last year
- Dataset Reset Policy Optimization ☆31 · Apr 12, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,003 · Dec 6, 2024 · Updated last year
- A simple and effective LLM pruning approach. ☆847 · Aug 9, 2024 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,105 · Oct 7, 2024 · Updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆157 · Apr 7, 2025 · Updated 10 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆172 · Jul 4, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- Official code repository for Sketch-of-Thought (SoT) ☆135 · May 8, 2025 · Updated 9 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆29 · Dec 5, 2024 · Updated last year
- [ACL 2025 Main] Repository for the paper "500xCompressor: Generalized Prompt Compression for Large Language Models" ☆56 · Jun 11, 2025 · Updated 8 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Mar 6, 2025 · Updated 11 months ago
- ☆145 · Sep 12, 2025 · Updated 5 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, compute attention with approximate, dynamic sparsity… ☆1,183 · Sep 30, 2025 · Updated 4 months ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Jul 24, 2023 · Updated 2 years ago
- ☆16 · Jul 23, 2024 · Updated last year