QwenLM / ConsisEval
☆11 · Updated last year
Alternatives and similar repositories for ConsisEval
Users interested in ConsisEval are comparing it to the repositories listed below.
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆59 · Updated last year
- Critique-out-Loud Reward Models ☆70 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆31 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆179 · Updated 6 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆63 · Updated last year
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆178 · Updated 2 months ago
- ☆112 · Updated this week
- ☆25 · Updated 3 years ago
- [ICLR 2025 Spotlight] Agent Trajectory Synthesis via Guiding Replay with Web Tutorials ☆46 · Updated 9 months ago
- [COLM 2025] EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees ☆28 · Updated 4 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated last month
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- [ICML'24] TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks ☆30 · Updated last year
- ☆58 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 6 months ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆56 · Updated 9 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆120 · Updated last month
- ☆29 · Updated 9 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆116 · Updated 6 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆31 · Updated last year
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 5 months ago
- ☆69 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆240 · Updated 2 months ago
- ☆105 · Updated last year
- [NeurIPS'25 D&B] Mind2Web-2 Benchmark: Evaluating Agentic Search with Agent-as-a-Judge ☆89 · Updated 3 weeks ago