LiveCodeBench / LiveCodeBench
Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"
☆311 · Updated 2 weeks ago
Alternatives and similar repositories for LiveCodeBench:
Users interested in LiveCodeBench are comparing it to the libraries listed below.
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆233 · Updated 3 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆286 · Updated this week
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆450 · Updated last week
- A multi-programming language benchmark for LLMs ☆223 · Updated 3 weeks ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym ☆325 · Updated last month
- The official evaluation suite and dynamic data release for MixEval. ☆231 · Updated 3 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆294 · Updated 3 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆125 · Updated 4 months ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark: https://arxiv.org/abs/2306.14898 ☆205 · Updated 9 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆306 · Updated 4 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆141 · Updated 2 months ago
- ☆255 · Updated 6 months ago
- A simple unified framework for evaluating LLMs ☆195 · Updated last week
- ☆305 · Updated 8 months ago
- Run evaluation on LLMs using the HumanEval benchmark ☆395 · Updated last year
- ☆345 · Updated 2 weeks ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆142 · Updated last week
- An Analytical Evaluation Board of Multi-turn LLM Agents ☆279 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models. ☆503 · Updated this week
- A project to improve the skills of large language models ☆247 · Updated this week
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆144 · Updated 6 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆131 · Updated last month
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆296 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆215 · Updated 3 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆182 · Updated last week
- Open Source WizardCoder Dataset ☆156 · Updated last year
- Code for the curation of The Stack v2 and StarCoder2 training data ☆95 · Updated 10 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆129 · Updated 6 months ago
- A Comprehensive Benchmark for Software Development. ☆91 · Updated 8 months ago