Run evaluation on LLMs using the HumanEval benchmark
☆427 · Updated Sep 12, 2023
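HumanEval-style harnesses such as code-eval run each generated completion against the problem's unit tests and report the unbiased pass@k estimator from OpenAI's Codex paper ("Evaluating Large Language Models Trained on Code"). A minimal sketch of that estimator, assuming `n` samples per problem of which `c` passed (the function name is illustrative, not code-eval's API):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: the probability that at least one of k samples
    drawn without replacement from n generations is correct, given that
    c of the n generations passed the unit tests."""
    if n - c < k:
        # Fewer than k failures exist, so any k-sample contains a pass.
        return 1.0
    # 1 minus the probability that all k drawn samples are failures.
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples per problem and 3 passing, pass@1 reduces to c/n:
print(pass_at_k(10, 3, 1))  # → 0.3
```

The `n - c < k` guard also keeps `comb` from being called with a negative argument; per-problem scores are then averaged across the benchmark to get the reported pass@k.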
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the libraries listed below.
- ☆85 · Updated Jun 13, 2023
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,688 · Updated Oct 2, 2025
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated Jul 12, 2023
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,137 · Updated Jan 17, 2025
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆83 · Updated Sep 10, 2023
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆478 · Updated Feb 5, 2025
- A framework for the evaluation of autoregressive code generation language models. ☆1,020 · Updated Jul 22, 2025
- ☆74 · Updated Sep 5, 2023
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,481 · Updated May 1, 2025
- Open Source WizardCoder Dataset ☆164 · Updated Jul 12, 2023
- Self-evaluating interview for AI coders ☆601 · Updated Jun 21, 2025
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆552 · Updated Mar 10, 2024
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆154 · Updated Dec 25, 2024
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,476 · Updated Jun 7, 2025
- ☆415 · Updated Nov 2, 2023
- ☆1,504 · Updated May 12, 2023
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,913 · Updated Sep 30, 2023
- APPS: Automated Programming Progress Standard (NeurIPS 2021) ☆510 · Updated Jun 19, 2024
- Generate the WizardCoder Instruct dataset from CodeAlpaca ☆21 · Updated Jun 27, 2023
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Updated Oct 19, 2023
- Salesforce open-source LLMs with 8k sequence length. ☆725 · Updated Jan 31, 2025
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,630 · Updated Sep 15, 2023
- Assign color hues to a collection of text fragments based on embeddings ☆20 · Updated Jun 15, 2024
- Run inference on MPT-30B using CPU ☆576 · Updated Jun 30, 2023
- ☆63 · Updated Sep 23, 2024
- The code and data for the paper JiuZhang3.0 ☆49 · Updated May 26, 2024
- ☆135 · Updated Nov 24, 2023
- Benchmark results from code generation with LLMs ☆17 · Updated Sep 1, 2023
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated Dec 22, 2023
- Customizable implementation of the self-instruct paper. ☆1,049 · Updated Mar 7, 2024
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,557 · Updated May 28, 2023
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated Jan 7, 2024
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆724 · Updated Oct 11, 2023
- A framework for few-shot evaluation of language models. ☆11,478 · Updated Feb 15, 2026
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Updated Aug 9, 2025
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆189 · Updated Aug 16, 2024
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,660 · Updated Mar 8, 2024
- Visual RAG using less than 300 lines of code. ☆30 · Updated Mar 2, 2024
- Fine-tune SantaCoder for Code/Text Generation. ☆196 · Updated Apr 11, 2023