open-compass / CriticEval
[NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs
☆48 · Updated last year
Alternatives and similar repositories for CriticEval
Users interested in CriticEval are comparing it to the repositories listed below.
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆51 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆67 · Updated last year
- This is the implementation of LeCo ☆31 · Updated 11 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆52 · Updated 7 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated 10 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 10 months ago
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆59 · Updated 2 years ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆156 · Updated 6 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago
- ☆58 · Updated last year
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 · Updated 5 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆53 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆61 · Updated 8 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆92 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 8 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆52 · Updated 5 months ago
- The official repository of the Omni-MATH benchmark ☆91 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆73 · Updated 5 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities ☆128 · Updated 7 months ago
- Reformatted Alignment ☆111 · Updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆73 · Updated 7 months ago
- ☆107 · Updated last month
- Evaluate the Quality of Critique ☆36 · Updated last year