GAIR-NLP / lm-open-science-evaluation
Reproducible and flexible LLM evaluations for scientific reasoning.
☆25 · Updated 4 months ago
Alternatives and similar repositories for lm-open-science-evaluation
Users interested in lm-open-science-evaluation are comparing it to the repositories listed below.
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 2 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 6 months ago
- The official repo of "WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents" ☆89 · Updated 2 months ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆29 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆86 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆82 · Updated 11 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆63 · Updated last year
- Benchmarking Benchmark Leakage in Large Language Models ☆57 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated last year
- ☆58 · Updated last year
- This is the implementation of LeCo ☆31 · Updated 10 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆64 · Updated last year
- ☆30 · Updated 11 months ago
- ☆69 · Updated last year
- [ICLR 2025] Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist ☆33 · Updated last year
- An instruction-following benchmark for large reasoning models ☆45 · Updated 4 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆52 · Updated last year
- A comprehensive benchmark for evaluating deep research agents on academic survey tasks ☆38 · Updated 3 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆180 · Updated 6 months ago
- Code and data for "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?" (ACL 2024) ☆32 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH ☆25 · Updated 11 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- ☆108 · Updated 5 months ago