Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"
☆817 · Updated Jul 16, 2025
Alternatives and similar repositories for LiveCodeBench
Users interested in LiveCodeBench are comparing it to the repositories listed below.
- Rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024) · ☆1,698 · Updated Oct 2, 2025
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI · ☆485 · Updated Jan 3, 2026
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ☆168 · Updated Oct 11, 2024
- ☆234 · Updated Feb 28, 2026
- A framework for the evaluation of autoregressive code generation language models. · ☆1,021 · Updated Jul 22, 2025
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? · ☆4,478 · Updated this week
- Reproducing R1 for Code with Reliable Rewards · ☆295 · Updated May 5, 2025
- A multi-programming-language benchmark for LLMs · ☆299 · Updated Jan 28, 2026
- CodeElo: Benchmarking Competition-Level Code Generation of LLMs with Human-Comparable Elo Ratings · ☆67 · Updated Feb 3, 2025
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] · ☆650 · Updated Jul 29, 2025
- LiveBench: A Challenging, Contamination-Free LLM Benchmark · ☆1,100 · Updated Mar 13, 2026
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ☆191 · Updated Aug 16, 2024
- Code for the paper "Evaluating Large Language Models Trained on Code" · ☆3,163 · Updated Jan 17, 2025
- Arena-Hard-Auto: An automatic LLM benchmark. · ☆1,008 · Updated Jun 21, 2025
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation · ☆322 · Updated Feb 24, 2025
- Agentless 🐱: an agentless approach to automatically solving software development problems · ☆2,019 · Updated Dec 22, 2024
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] · ☆99 · Updated Apr 9, 2025
- Training and Benchmarking LLMs for Code Preference. · ☆38 · Updated Nov 15, 2024
- ☆56 · Updated May 28, 2024
- Democratizing Reinforcement Learning for LLMs · ☆5,219 · Updated Mar 13, 2026
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" · ☆678 · Updated Mar 16, 2025
- ☆1,111 · Updated Jan 10, 2026
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning · ☆30 · Updated Mar 5, 2024
- EvoEval: Evolving Coding Benchmarks via LLM · ☆81 · Updated Apr 6, 2024
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark · ☆477 · Updated Sep 30, 2024
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆175 · Updated Aug 15, 2025
- 🐙 OctoPack: Instruction Tuning Code Large Language Models · ☆478 · Updated Feb 5, 2025
- verl: Volcano Engine Reinforcement Learning for LLMs · ☆19,919 · Updated this week
- A benchmark for LLMs on complicated tasks in the terminal · ☆1,732 · Updated Jan 22, 2026
- A framework for few-shot evaluation of language models. · ☆11,704 · Updated Mar 5, 2026
- Scalable toolkit for efficient model alignment · ☆851 · Updated Oct 6, 2025
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ☆267 · Updated Oct 30, 2024
- Code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] · ☆347 · Updated Feb 20, 2026
- [COLM 2025] Official repository for "R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents" · ☆254 · Updated Jul 13, 2025
- APPS: Automated Programming Progress Standard (NeurIPS 2021) · ☆518 · Updated Jun 19, 2024
- The rule-based evaluation subset and code implementation of Omni-MATH · ☆26 · Updated Dec 23, 2024
- A series of technical reports on slow thinking with LLMs · ☆761 · Updated Aug 13, 2025
- ClassEval: a benchmark for class-level code generation · ☆145 · Updated Oct 24, 2024
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages · ☆62 · Updated Oct 21, 2024