LCM-Lab / LOOM-Scope
A comprehensive and efficient long-context model evaluation framework
☆18 · Updated last month
Alternatives and similar repositories for LOOM-Scope
Users interested in LOOM-Scope are comparing it to the repositories listed below.
- ☆16 · Updated 2 weeks ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆13 · Updated 7 months ago
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆18 · Updated 11 months ago
- ☆34 · Updated last month
- [ACM MM25] LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models ☆19 · Updated 5 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 9 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆71 · Updated last week
- ☆13 · Updated 8 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆30 · Updated last month
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆40 · Updated 7 months ago
- The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions" ☆13 · Updated 3 weeks ago
- Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" ☆22 · Updated 4 months ago
- [COLM 2024] SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning ☆33 · Updated last year
- ☆15 · Updated 10 months ago
- Emergent Hierarchical Reasoning in LLMs/VLMs through Reinforcement Learning ☆25 · Updated 2 weeks ago
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ☆22 · Updated last month
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated last year
- ☆24 · Updated last week
- ☆22 · Updated last year
- Code for "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆18 · Updated 5 months ago
- ☆19 · Updated 6 months ago
- TARS: MinMax Token-Adaptive Preference Strategy for Hallucination Reduction in MLLMs ☆21 · Updated this week
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning" ☆19 · Updated 7 months ago
- DCPO: Dynamic Adaptive Clipping for RL ☆32 · Updated 2 weeks ago
- ☆45 · Updated last week
- ☆18 · Updated last month
- ☆16 · Updated last year
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆22 · Updated last month
- ☆48 · Updated 7 months ago