LCM-Lab / LOOM-Eval
A comprehensive and efficient long-context model evaluation framework
☆28 · Updated this week
Alternatives and similar repositories for LOOM-Eval
Users interested in LOOM-Eval are comparing it to the libraries listed below.
- Mixture-of-Basis-Experts for Compressing MoE-based LLMs ☆25 · Updated 3 weeks ago
- Code and Model for NeurIPS 2024 Spotlight Paper "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training…" ☆44 · Updated last year
- ☆19 · Updated last year
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated last year
- ☆15 · Updated last year
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Updated 3 months ago
- ☆21 · Updated last month
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Updated 6 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆61 · Updated 11 months ago
- [NeurIPS 2025] Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" ☆28 · Updated 3 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated 3 weeks ago
- [ICML'25] Official code of paper "Fast Large Language Model Collaborative Decoding via Speculation" ☆28 · Updated 6 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆54 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 10 months ago
- ☆46 · Updated 3 months ago
- ☆57 · Updated last week
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · Updated 7 months ago
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆21 · Updated last year
- ☆85 · Updated 2 months ago
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆29 · Updated 2 months ago
- SIFT: Grounding LLM Reasoning in Contexts via Stickers ☆57 · Updated 10 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆106 · Updated 8 months ago
- Evaluating the faithfulness of long-context language models ☆30 · Updated last year
- ☆29 · Updated 7 months ago
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning" ☆21 · Updated 10 months ago
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models ☆15 · Updated last year
- ☆53 · Updated 6 months ago
- ☆127 · Updated 7 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs ☆76 · Updated last year
- ☆22 · Updated 7 months ago