Measuring Massive Multitask Language Understanding | ICLR 2021
☆1,566 · Updated May 28, 2023
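Before comparing alternatives, it can help to look at the benchmark itself. A minimal sketch for loading the MMLU data, assuming the Hugging Face Hub mirror `cais/mmlu` and the `datasets` library (the `all` config name and the field names are Hub conventions, not part of this repository):

```python
# Minimal sketch: inspecting the MMLU benchmark via its Hugging Face
# Hub mirror. Assumes `pip install datasets`; "cais/mmlu" and the
# "all" config (bundling the 57 subjects) are Hub conventions, not
# part of this repository itself.
from datasets import load_dataset

mmlu = load_dataset("cais/mmlu", "all")
example = mmlu["test"][0]
# Each row carries a question, four answer choices, and the index of
# the correct choice.
print(example["question"], example["choices"], example["answer"])
```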
Alternatives and similar repositories for test
Users interested in test are comparing it to the libraries listed below.
- A framework for few-shot evaluation of language models (see the usage sketch after this list). ☆11,704 · Updated Mar 5, 2026
- CMMLU: Measuring massive multitask language understanding in Chinese ☆806 · Updated Dec 6, 2024
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,215 · Updated Jul 19, 2024
- ☆1,405 · Updated Jan 21, 2024
- ☆771 · Updated Jun 13, 2024
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,824 · Updated Jul 27, 2025
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆549 · Updated Jun 25, 2024
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,163 · Updated Jan 17, 2025
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,708 · Updated Mar 13, 2026
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,769 · Updated Aug 4, 2024
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Updated Aug 9, 2025
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆552 · Updated Mar 10, 2024
- OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, Llama2, Qwen, GLM, Claude, …) ☆6,765 · Updated this week
- The MATH Dataset (NeurIPS 2021) ☆1,321 · Updated Sep 6, 2025
- TruthfulQA: Measuring How Models Mimic Human Falsehoods ☆891 · Updated Jan 16, 2025
- Aligning pretrained language models with instruction data generated by themselves. ☆4,588 · Updated Mar 27, 2023
- LongBench v2 and LongBench (ACL'25 & '24) ☆1,114 · Updated Jan 15, 2025
- AllenAI's post-training codebase ☆3,629 · Updated this week
- Train transformer language models with reinforcement learning. ☆17,697 · Updated this week
- Ongoing research training transformer models at scale ☆15,647 · Updated this week
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,102 · Updated Jun 1, 2023
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆393 · Updated Jul 9, 2024
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆477 · Updated Sep 30, 2024
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,745 · Updated Nov 15, 2025
- ☆1,560 · Updated this week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,827 · Updated Jun 17, 2025
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,258 · Updated Jul 17, 2024
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,428 · Updated Jun 2, 2025
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,739 · Updated Jan 8, 2024
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models. ☆728 · Updated Jan 7, 2025
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,809 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · Updated this week
- ☆4,398 · Updated Jul 31, 2025
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,233 · Updated Aug 14, 2025
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,929 · Updated Dec 7, 2024
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. ☆18,014 · Updated Nov 3, 2025
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆3,238 · Updated Feb 8, 2026
- Toolkit for creating, sharing and using natural language prompts. ☆3,006 · Updated Oct 23, 2023
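Most of the harnesses above are driven the same way: point them at a model and a task name. A minimal sketch of scoring a Hugging Face checkpoint on MMLU with the few-shot evaluation framework listed first, assuming a recent `lm-evaluation-harness` release that exposes `lm_eval.simple_evaluate` (`gpt2` is just a placeholder model id):

```python
# Minimal sketch: 5-shot MMLU with EleutherAI's lm-evaluation-harness
# (the first repo in the list). Assumes `pip install lm-eval` and a
# version exposing lm_eval.simple_evaluate; "gpt2" is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face transformers backend
    model_args="pretrained=gpt2",  # any causal-LM checkpoint id
    tasks=["mmlu"],                # MMLU task group (57 subjects)
    num_fewshot=5,                 # the standard 5-shot MMLU setting
)
print(results["results"])          # per-task and aggregate accuracies
```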