WHGTyen / BIG-Bench-Mistake
A dataset of LLM-generated chain-of-thought steps annotated with mistake location.
☆82 · Updated last year
Alternatives and similar repositories for BIG-Bench-Mistake
Users interested in BIG-Bench-Mistake are comparing it to the repositories listed below.
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated 11 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆123 · Updated 10 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆166 · Updated 2 weeks ago
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆102 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated last year
- ☆52 · Updated last year
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 3 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆105 · Updated 6 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆108 · Updated 7 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 4 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆87 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆62 · Updated 11 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆49 · Updated last month
- ☆74 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆120 · Updated last year
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆54 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆52 · Updated last year
- ☆100 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated this week
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- "Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators" (Liu et al., COLM 2024) ☆48 · Updated 8 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- ☆150 · Updated last year
- Contrastive Chain-of-Thought Prompting ☆68 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning" ☆163 · Updated last year