GAIR-NLP / scaleeval
Scalable Meta-Evaluation of LLMs as Evaluators
☆42 · Updated 11 months ago
Alternatives and similar repositories for scaleeval:
Users interested in scaleeval are comparing it to the libraries listed below.
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated 11 months ago
- ☆51 · Updated 2 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆131 · Updated 3 months ago
- ☆40 · Updated 3 months ago
- Evaluate the Quality of Critique ☆35 · Updated 7 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆66 · Updated 2 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 11 months ago
- Official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆91 · Updated last month
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆68 · Updated 3 weeks ago
- Critique-out-Loud Reward Models ☆48 · Updated 3 months ago
- ☆64 · Updated 11 months ago
- Reformatted Alignment ☆113 · Updated 4 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 4 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated last month
- ☆58 · Updated 4 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆33 · Updated 4 months ago
- Curated references for searching, selecting, and synthesizing high-quality, large-quantity data for post-training LLMs ☆48 · Updated 3 months ago
- ☆87 · Updated last week
- Codebase accompanying the "Summary of a Haystack" paper ☆74 · Updated 4 months ago
- ☆67 · Updated last month
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆82 · Updated last year
- Reproducible, flexible LLM evaluations ☆129 · Updated last month
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" ☆98 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆106 · Updated 6 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆41 · Updated last week
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆66 · Updated 7 months ago
- Official implementation of "Extending LLMs' Context Window with 100 Samples" ☆76 · Updated last year