GAIR-NLP / scaleeval
Scalable Meta-Evaluation of LLMs as Evaluators
☆42 · Updated last year
Alternatives and similar repositories for scaleeval:
Users interested in scaleeval are comparing it to the repositories listed below.
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆139 · Updated 6 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 8 months ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆72 · Updated 6 months ago
- ☆69 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆124 · Updated 10 months ago
- ☆43 · Updated 8 months ago
- Evaluate the Quality of Critique ☆34 · Updated 11 months ago
- Critique-out-Loud Reward Models ☆64 · Updated 6 months ago
- ☆62 · Updated last month
- [arXiv preprint] Official repository for "Evaluating Language Models as Synthetic Data Generators" ☆33 · Updated 4 months ago
- ☆97 · Updated 10 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 11 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- A large-scale, high-quality math dataset for reinforcement learning in language models ☆51 · Updated 2 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 8 months ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 3 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆35 · Updated 8 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆45 · Updated 5 months ago
- Implementation of "SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models" ☆27 · Updated 2 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated 3 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 2 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated 10 months ago
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆94 · Updated 3 weeks ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆44 · Updated 10 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated 10 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆85 · Updated last month
- ☆59 · Updated 8 months ago