suzgunmirac / BIG-Bench-Hard
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
☆548 · Updated Jun 25, 2024
Alternatives and similar repositories for BIG-Bench-Hard
Users interested in BIG-Bench-Hard are comparing it to the repositories listed below.
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,768 · Updated Aug 4, 2024
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,203 · Updated Jul 19, 2024
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,552 · Updated May 28, 2023
- ☆772 · Updated Jun 13, 2024
- ☆1,559 · Updated Feb 5, 2026
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,946 · Updated Aug 9, 2025
- ☆1,390 · Updated Jan 21, 2024
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,092 · Updated Jun 1, 2023
- Aligning pretrained language models with instruction data generated by themselves. ☆4,571 · Updated Mar 27, 2023
- A trend started by "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,100 · Updated Oct 5, 2023
- A framework for few-shot evaluation of language models. ☆11,393 · Updated this week
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive benchmark for evaluating long-context language models ☆391 · Updated Jul 9, 2024
- Paper List for In-context Learning 🌷 ☆875 · Updated Oct 8, 2024
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆588 · Updated Dec 9, 2024
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,812 · Updated Jul 27, 2025
- Expanding natural instructions ☆1,030 · Updated Dec 11, 2023
- AllenAI's post-training codebase ☆3,573 · Updated this week
- A modular RL library to fine-tune language models to human preferences ☆2,377 · Updated Mar 1, 2024
- Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Langu…" ☆354 · Updated Jun 18, 2023
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆3,162 · Updated Feb 8, 2026
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆113 · Updated Apr 28, 2022
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated Mar 10, 2024
- Prod Env ☆439 · Updated Oct 9, 2023
- The MATH Dataset (NeurIPS 2021) ☆1,301 · Updated Sep 6, 2025
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,127 · Updated Jan 17, 2025
- RewardBench: the first evaluation tool for reward models. ☆687 · Updated Jan 31, 2026
- A prize for finding tasks that cause large language models to show inverse scaling ☆620 · Updated Oct 11, 2023
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models… ☆2,667 · Updated Feb 9, 2026
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,814 · Updated Jun 17, 2025
- Instruction Tuning with GPT-4 ☆4,340 · Updated Jun 11, 2023
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large…" ☆1,076 · Updated Sep 27, 2025
- [NeurIPS D&B 2024] Generative AI for Math: MathPile ☆418 · Updated Apr 4, 2025
- Toolkit for creating, sharing and using natural language prompts. ☆2,997 · Updated Oct 23, 2023
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆554 · Updated Feb 12, 2024
- GAOKAO-Bench-Updates is a supplement to GAOKAO-Bench, a dataset for evaluating large language models. ☆38 · Updated Jan 7, 2025
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,742 · Updated Jan 8, 2024
- Tasks for describing differences between text distributions. ☆17 · Updated Aug 9, 2024
- ☆921 · Updated May 22, 2024
- ☆290 · Updated Dec 2, 2022