abacaj / code-eval
Run evaluation on LLMs using the HumanEval benchmark
⭐424 · Updated 2 years ago
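For context, a minimal sketch of the HumanEval-style workflow that code-eval automates, assuming OpenAI's `human-eval` package (`pip install human-eval`); `generate_one_completion` here is a hypothetical stand-in for your model call, not part of either library:

```python
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Hypothetical placeholder: call your model here and return only
    # the code that completes the given function prompt.
    raise NotImplementedError

# The 164 HumanEval problems, keyed by task_id (e.g. "HumanEval/0").
problems = read_problems()

samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
```

Scoring then executes the generated code against the benchmark's unit tests, e.g. `evaluate_functional_correctness samples.jsonl`, which reports pass@k.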
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the libraries listed below.
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ⭐475 · Updated 9 months ago
- Open Source WizardCoder Dataset ⭐162 · Updated 2 years ago
- ⭐277 · Updated 2 years ago
- A framework for the evaluation of autoregressive code generation language models (such harnesses report pass@k; see the sketch after this list). ⭐1,002 · Updated 4 months ago
- ⭐671 · Updated last year
- Fine-tune SantaCoder for Code/Text Generation. ⭐194 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ⭐552 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ⭐321 · Updated 9 months ago
- ⭐84 · Updated 2 years ago
- Accepted by Transactions on Machine Learning Research (TMLR) ⭐136 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ⭐478 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ⭐259 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark https://arxiv.org/abs/2306.14898 ⭐230 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) ⭐517 · Updated 2 years ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ⭐181 · Updated last year
- ⭐313 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ⭐555 · Updated 2 years ago
- ⭐379 · Updated 2 years ago
- Official repository for LongChat and LongEval ⭐532 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ⭐376 · Updated last year
- NexusRaven-13B, a new SOTA open-source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… ⭐318 · Updated 2 years ago
- ⭐481 · Updated last year
- ⭐556 · Updated last year
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ⭐472 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ⭐315 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ⭐530 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ⭐300 · Updated 9 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ⭐164 · Updated 3 months ago
- A multi-programming-language benchmark for LLMs ⭐283 · Updated 2 weeks ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ⭐758 · Updated last year
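The pass@k metric these evaluation harnesses report is usually the unbiased estimator from the HumanEval paper (Chen et al., 2021); a minimal sketch, where n is the number of samples generated per problem and c the number that pass the tests:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn from n total (c of them correct) passes the unit tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples per problem, 37 passing.
print(pass_at_k(200, 37, 1))    # 0.185 == 37/200, i.e. pass@1
print(pass_at_k(200, 37, 10))   # pass@10
print(pass_at_k(200, 37, 100))  # pass@100
```

Note that pass@1 reduces to the plain pass rate c/n; larger k rewards models whose correct completions are spread across many problems.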