openai / human-eval
Code for the paper "Evaluating Large Language Models Trained on Code"
☆2,616 · Updated last month
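For context on what the benchmark itself looks like in use, here is a minimal sketch following the helpers documented in the repo's README; `generate_one_completion` is a hypothetical stand-in for your own model call, and the scoring step executes untrusted model-generated code, so the repo recommends sandboxing it.

```python
# Minimal sketch of driving HumanEval, per the repo's README.
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Hypothetical placeholder: call your model and return the code that
    # completes the function signature given in `prompt`.
    return "    return None\n"

problems = read_problems()  # maps task_id -> {"prompt", "test", "entry_point", ...}

num_samples_per_task = 1  # sample more completions (e.g. 200) to estimate pass@k
samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# Score with the bundled CLI (executes generated code; sandbox it):
#   $ evaluate_functional_correctness samples.jsonl
```

The evaluator reports functional-correctness pass@k over the sampled completions.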
Alternatives and similar repositories for human-eval:
Users interested in human-eval are comparing it to the libraries listed below.
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,686 · Updated 7 months ago
- A framework for the evaluation of autoregressive code generation language models. ☆902 · Updated 4 months ago
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09…) ☆2,099 · Updated this week
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,423 · Updated last month
- [NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning ☆2,613 · Updated 2 months ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,304 · Updated last year
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,394 · Updated 2 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,336 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆3,885 · Updated 2 months ago
- [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models ☆2,330 · Updated last year
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI ☆2,007 · Updated 7 months ago
- Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated) ☆1,751 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,944 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ☆2,792 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,701 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,596 · Updated last year
- ☆734 · Updated 9 months ago
- ☆634 · Updated 4 months ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,397 · Updated last year
- ☆1,506 · Updated last month
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,681 · Updated 2 months ago
- A framework for few-shot evaluation of language models. ☆8,213 · Updated this week
- prompt2model - Generate Deployable Models from Natural Language Instructions ☆1,982 · Updated 2 months ago
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools ☆1,043 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,410 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,749 · Updated last month
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,469 · Updated last month
- Instruction Tuning with GPT-4 ☆4,279 · Updated last year
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆4,920 · Updated 3 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs ☆920 · Updated 4 months ago