open-compass / CompassVerifier
[EMNLP 2025] CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward
☆58 · Updated 4 months ago
Alternatives and similar repositories for CompassVerifier
Users interested in CompassVerifier are comparing it to the repositories listed below.
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 10 months ago
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆131 · Updated last month
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆118 · Updated 7 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆84 · Updated 8 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆154 · Updated 5 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆124 · Updated 8 months ago
- WideSearch: Benchmarking Agentic Broad Info-Seeking ☆104 · Updated 2 months ago
- The official repository of the Omni-MATH benchmark. ☆88 · Updated 11 months ago
- The official repo of "WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents" ☆89 · Updated 2 months ago
- ☆86 · Updated 4 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆80 · Updated 2 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25] ☆95 · Updated 8 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆60 · Updated 6 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆119 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆141 · Updated last month
- ☆58 · Updated last year
- ☆108 · Updated 5 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year
- ☆78 · Updated 9 months ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆49 · Updated 2 years ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆38 · Updated last year
- Scaling Agentic Reinforcement Learning with a Multi-Turn, Multi-Task Framework ☆154 · Updated this week