liyucheng09 / Selective_Context
Compress your input to ChatGPT and other LLMs, letting them process 2x more content and saving 40% of memory and GPU time.
☆390 · Updated last year
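Selective Context compresses prompts by scoring spans with a causal language model's self-information and dropping the least informative ones. The following is a minimal, self-contained sketch of that idea, with a unigram frequency model standing in for the LM (a hypothetical simplification; the real repo scores tokens with GPT-2-style models):

```python
# Sketch of self-information-based context pruning.
# Assumption: a unigram frequency model replaces the causal LM
# that Selective_Context actually uses for scoring.
import math
from collections import Counter


def compress(text: str, keep_ratio: float = 0.5) -> str:
    """Keep roughly the top `keep_ratio` most informative words."""
    words = text.split()
    if not words:
        return text
    freq = Counter(w.lower() for w in words)
    total = sum(freq.values())
    # Self-information: -log p(w); rarer words carry more information.
    scores = [-math.log(freq[w.lower()] / total) for w in words]
    cutoff = sorted(scores)[int(len(scores) * (1 - keep_ratio))]
    kept = [w for w, s in zip(words, scores) if s >= cutoff]
    return " ".join(kept)


sample = "the cat sat on the mat and the cat slept on the mat"
print(compress(sample, keep_ratio=0.5))  # frequent filler like "the" is pruned
```

Ties in the score distribution mean the output can keep slightly more than `keep_ratio` of the words; the actual tool works at the token, phrase, or sentence level rather than on whitespace-split words.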
Alternatives and similar repositories for Selective_Context
Users interested in Selective_Context are comparing it to the repositories listed below.
- FireAct: Toward Language Agent Fine-tuning ☆281 · Updated last year
- ☆311 · Updated last year
- ☆270 · Updated 2 years ago
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning ☆229 · Updated 6 months ago
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs". ☆235 · Updated 11 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆115 · Updated 10 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆255 · Updated 3 weeks ago
- This is the official repo for "PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization". PromptAgen… ☆305 · Updated 2 weeks ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆309 · Updated 10 months ago
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆368 · Updated this week
- Official implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆222 · Updated last month
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation ☆193 · Updated last year
- ☆182 · Updated 4 months ago
- Generative Judge for Evaluating Alignment ☆244 · Updated last year
- ☆298 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆192 · Updated 8 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆187 · Updated last year
- Benchmark baseline for retrieval QA applications ☆115 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆467 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆246 · Updated 9 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆272 · Updated last year
- Open Source WizardCoder Dataset ☆159 · Updated 2 years ago
- ☆237 · Updated 11 months ago
- NexusRaven-13B, a new SOTA open-source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… ☆316 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆343 · Updated 10 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆561 · Updated 7 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆388 · Updated last year
- ☆320 · Updated 10 months ago
- Build Hierarchical Autonomous Agents through Config. Collaborative Growth of Specialized Agents. ☆320 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆253 · Updated last year