Run evaluation on LLMs using the HumanEval benchmark
☆428 · Sep 12, 2023 · Updated 2 years ago
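For context, HumanEval evaluation typically follows OpenAI's `human-eval` package: generate a completion per task, write the samples to a JSONL file, and score functional correctness with the bundled CLI. A minimal sketch follows; `generate_completion` is a hypothetical stand-in for whatever model code-eval wires up, not part of either package:

```python
# Minimal HumanEval evaluation sketch using OpenAI's `human-eval` package
# (pip install human-eval).
from human_eval.data import read_problems, write_jsonl

def generate_completion(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; always returns a stub body."""
    return "    pass\n"

problems = read_problems()  # the 164 HumanEval programming tasks

# One sample per task; raise the sample count to estimate pass@k for k > 1.
samples = [
    {"task_id": task_id, "completion": generate_completion(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Then score pass@k from the shell:
#   evaluate_functional_correctness samples.jsonl
```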
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the libraries listed below.
- ☆85 · Jun 13, 2023 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆83 · Sep 10, 2023 · Updated 2 years ago
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,163 · Jan 17, 2025 · Updated last year
- Rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024) ☆1,698 · Oct 2, 2025 · Updated 5 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Jul 12, 2023 · Updated 2 years ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆479 · Feb 5, 2025 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. ☆1,021 · Jul 22, 2025 · Updated 7 months ago
- Generate WizardCoder Instruct data from CodeAlpaca ☆21 · Jun 27, 2023 · Updated 2 years ago
- ☆74 · Sep 5, 2023 · Updated 2 years ago
- Open Source WizardCoder Dataset ☆166 · Jul 12, 2023 · Updated 2 years ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆154 · Dec 25, 2024 · Updated last year
- APPS: Automated Programming Progress Standard (NeurIPS 2021) ☆520 · Jun 19, 2024 · Updated last year
- ☆1,506 · May 12, 2023 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆552 · Mar 10, 2024 · Updated 2 years ago
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,479 · May 1, 2025 · Updated 10 months ago
- Self-evaluating interview for AI coders ☆601 · Jun 21, 2025 · Updated 9 months ago
- Benchmark results from code generation with LLMs ☆17 · Sep 1, 2023 · Updated 2 years ago
- Evaluation results of code generation LLMs ☆31 · Sep 1, 2023 · Updated 2 years ago
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,478 · Jun 7, 2025 · Updated 9 months ago
- The code and data for the paper JiuZhang3.0 ☆49 · May 26, 2024 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,913 · Sep 30, 2023 · Updated 2 years ago
- A distributed, extensible, secure solution for evaluating machine generated code with unit tests in multiple programming languages. ☆62 · Oct 21, 2024 · Updated last year
- ☆415 · Nov 2, 2023 · Updated 2 years ago
- Measuring Massive Multitask Language Understanding (ICLR 2021) ☆1,566 · May 28, 2023 · Updated 2 years ago
- Assign color hues to a collection of text fragments based on embeddings ☆20 · Jun 15, 2024 · Updated last year
- ☆283 · Apr 25, 2023 · Updated 2 years ago
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,629 · Sep 15, 2023 · Updated 2 years ago
- A multi-programming language benchmark for LLMs ☆299 · Jan 28, 2026 · Updated last month
- Salesforce open-source LLMs with 8k sequence length. ☆726 · Jan 31, 2025 · Updated last year
- A framework for few-shot evaluation of language models. ☆11,704 · Mar 5, 2026 · Updated 2 weeks ago
- Fine-tune SantaCoder for Code/Text Generation. ☆196 · Apr 11, 2023 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Oct 19, 2023 · Updated 2 years ago
- Visual Studio Code extension for WizardCoder ☆149 · Aug 1, 2023 · Updated 2 years ago
- Customizable implementation of the self-instruct paper. ☆1,050 · Mar 7, 2024 · Updated 2 years ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Dec 22, 2023 · Updated 2 years ago
- ☆135 · Nov 24, 2023 · Updated 2 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Aug 9, 2025 · Updated 7 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,769 · Aug 4, 2024 · Updated last year
- Evol-augment any dataset online ☆61 · Aug 3, 2023 · Updated 2 years ago