Source code for the paper "LongGenBench: Long-context Generation Benchmark"
☆23, updated Oct 8, 2024
Alternatives and similar repositories for LongGenBench
Users interested in LongGenBench are comparing it to the repositories listed below.
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation (☆34, updated May 28, 2025)
- Implementation of AdaCQR (COLING 2025) (☆13, updated Dec 30, 2024)
- ☆12 (updated Sep 1, 2023)
- [WWW 2026] BaiJia: An Open Role-Playing Platform of Chinese Historical Characters (☆25, updated Jan 14, 2026)
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" (☆48, updated Jun 19, 2024)
- ☆16 (updated Feb 8, 2024)
- The Official Implementation of Ada-KV [NeurIPS 2025] (☆128, updated Nov 26, 2025)
- KV cache compression for high-throughput LLM inference (☆153, updated Feb 5, 2025)
- ☆16 (updated Feb 24, 2026)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆377, updated Jul 10, 2025)
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) (☆379, updated Sep 25, 2024)
- ☆306 (updated Jul 10, 2025)
- Imperative deep learning framework with customized GPU and CPU backend (☆29, updated Jul 25, 2023)
- End-to-end steps for adding custom ops in PyTorch (☆24, updated Aug 20, 2020)
- ☆33 (updated Sep 29, 2021)
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… (☆29, updated Feb 17, 2025)
- A selective knowledge distillation algorithm for efficient speculative decoders (☆36, updated Nov 27, 2025)
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection (☆55, updated Oct 29, 2024)
- ☆39 (updated Sep 13, 2025)
- This repository provides an improved LLamaGen Model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… (☆30, updated Oct 21, 2024)
- Examples for the MS-AMP package (☆30, updated Jul 17, 2025)
- [XLLM@ACL2025] Official Code for "Less is More: Enhancing Structured Multi-Agent Reasoning via Quality-Guided Distillation" (☆23, updated Jul 29, 2025)
- ☆11 (updated Jan 17, 2024)
- Official code and resources for the paper "EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation" (☆23, updated Dec 23, 2024)
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" (☆104, updated Nov 9, 2024)
- ☆13 (updated Mar 9, 2024)
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches (☆77, updated Mar 8, 2024)
- The official GitHub repo for the open online course "Dive into LLMs" (☆10, updated Mar 15, 2024)
- ☆16 (updated Jul 12, 2024)
- Fast and memory-efficient exact attention (☆20, updated Mar 13, 2026)
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities (☆29, updated Apr 2, 2025)
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" (☆164, updated Oct 13, 2025)
- ☆24 (updated May 6, 2022)
- Official code for "Guiding Language Model Math Reasoning with Planning Tokens" (☆18, updated Feb 29, 2024)
- Code accompanying our EMNLP 2019 paper "Revisiting the Evaluation of Theory of Mind through Question Answering" (☆26, updated Aug 9, 2020)
- The official implementation of dLLM-Var (☆31, updated Nov 6, 2025)
- ☆36 (updated Feb 12, 2025)
- Implementations of some LLM KV cache sparsity methods (☆41, updated Jun 6, 2024)