openai / human-eval
Code for the paper "Evaluating Large Language Models Trained on Code"
☆2,529 · Updated last week
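human-eval measures the functional correctness of model-generated code on the HumanEval benchmark using the pass@k metric. Below is a minimal sketch of the workflow documented in the repository; `generate_one_completion` is a placeholder for your own model call and is not part of the library.

```python
# Minimal sketch of the human-eval sampling workflow (assumes the package is installed).
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Placeholder: call your model here and return only the code that completes `prompt`.
    return "    return 0\n"

problems = read_problems()        # dict of HumanEval problems keyed by task_id
num_samples_per_task = 1          # increase (e.g. to 200) to estimate pass@k for k > 1
samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# Score with the bundled CLI (it executes model-generated code; see the repo's safety notes):
#   $ evaluate_functional_correctness samples.jsonl
```

From these samples, the paper's pass@k is computed per problem with the unbiased estimator 1 - C(n-c, k) / C(n, k) and averaged over problems, where n completions are generated per problem and c of them pass the unit tests.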
Alternatives and similar repositories for human-eval:
Users interested in human-eval are comparing it to the libraries listed below:
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,346 · Updated 2 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,291 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,644 · Updated 5 months ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆3,811 · Updated 2 weeks ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,248 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,625 · Updated last month
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,673 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. ☆873 · Updated 2 months ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,532 · Updated last year
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09…) ☆2,028 · Updated this week
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,349 · Updated last month
- Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated) ☆1,680 · Updated 10 months ago
- [NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning ☆2,562 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆5,157 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,568 · Updated this week
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆903 · Updated 3 months ago
- Instruction Tuning with GPT-4 ☆4,259 · Updated last year
- A quick guide (especially) for trending instruction finetuning datasets ☆2,798 · Updated last year
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆2,937 · Updated 6 months ago
- Toolkit for creating, sharing and using natural language prompts. ☆2,747 · Updated last year
- A framework for few-shot evaluation of language models. ☆7,576 · Updated this week
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,342 · Updated 3 weeks ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,650 · Updated last week
- Implementation of Toolformer (Language Models That Can Use Tools), by Meta AI ☆1,996 · Updated 6 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,405 · Updated 9 months ago
- A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF) ☆4,570 · Updated last year