open-compass / CriticEval
[NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs
☆46 · Updated 9 months ago
Alternatives and similar repositories for CriticEval
Users interested in CriticEval are comparing it with the repositories listed below.
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" · ☆50 · Updated 10 months ago
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" · ☆58 · Updated last year
- The code and data for the paper JiuZhang3.0 · ☆49 · Updated last year
- Large Language Models Can Self-Improve in Long-context Reasoning · ☆73 · Updated 9 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy · ☆69 · Updated 9 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" · ☆58 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling · ☆51 · Updated 3 months ago
- This is the implementation of LeCo · ☆31 · Updated 7 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25] · ☆88 · Updated 5 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners · ☆83 · Updated 3 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" · ☆75 · Updated 3 months ago
- ☆59 · Updated last year
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. · ☆64 · Updated 10 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models · ☆62 · Updated 9 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… · ☆52 · Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO · ☆53 · Updated 4 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models · ☆89 · Updated last year
- ☆72 · Updated 6 months ago
- ☆53 · Updated 7 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" · ☆59 · Updated 10 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI · ☆104 · Updated 6 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" · ☆38 · Updated last year
- The official repository of the Omni-MATH benchmark. · ☆87 · Updated 8 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning · ☆110 · Updated 4 months ago
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset · ☆106 · Updated 3 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models · ☆133 · Updated 2 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… · ☆65 · Updated 3 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) · ☆86 · Updated 11 months ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) · ☆50 · Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains · ☆171 · Updated 3 months ago