WHGTyen / BIG-Bench-Mistake
A dataset of LLM-generated chain-of-thought steps annotated with mistake locations.
☆81 · Updated last year
Alternatives and similar repositories for BIG-Bench-Mistake
Users interested in BIG-Bench-Mistake are comparing it to the repositories listed below.
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 9 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆147 · Updated 10 months ago
- ☆50 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆165 · Updated 4 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆113Updated last week
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- ☆100 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated 11 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding ☆27 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated 7 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆103 · Updated 6 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 3 months ago
- ☆93 · Updated 4 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆62 · Updated 11 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 3 months ago
- ☆74 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆30 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆58 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆127 · Updated 11 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆49 · Updated last month
- Code and data accompanying our paper on arXiv, "Faithful Chain-of-Thought Reasoning" ☆163 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities ☆160 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year