Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"
☆803 · Updated Jul 16, 2025
Alternatives and similar repositories for LiveCodeBench
Users interested in LiveCodeBench are comparing it to the libraries listed below.
- Rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024) ☆1,688 · Updated Oct 2, 2025
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆166 · Updated Oct 11, 2024
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆479 · Updated Jan 3, 2026
- ☆232 · Updated Dec 3, 2025
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆4,337 · Updated Feb 19, 2026
- A framework for the evaluation of autoregressive code generation language models. ☆1,020 · Updated Jul 22, 2025
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆632 · Updated Jul 29, 2025
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆65 · Updated Feb 3, 2025
- A multi-programming-language benchmark for LLMs ☆298 · Updated Jan 28, 2026
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆187 · Updated Aug 16, 2024
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆1,045 · Updated Feb 19, 2026
- Arena-Hard-Auto: An automatic LLM benchmark. ☆997 · Updated Jun 21, 2025
- Reproducing R1 for Code with Reliable Rewards ☆288 · Updated May 5, 2025
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,137 · Updated Jan 17, 2025
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆30 · Updated Mar 5, 2024
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆677 · Updated Mar 16, 2025
- Agentless🐱: an agentless approach to automatically solving software development problems ☆2,010 · Updated Dec 22, 2024
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆323 · Updated Feb 24, 2025
- Democratizing Reinforcement Learning for LLMs ☆5,135 · Updated Feb 20, 2026
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆174 · Updated Aug 15, 2025
- ☆1,098 · Updated Jan 10, 2026
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆478 · Updated Feb 5, 2025
- Scalable toolkit for efficient model alignment ☆851 · Updated Oct 6, 2025
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆243 · Updated Jul 13, 2025
- A framework for few-shot evaluation of language models. ☆11,478 · Updated Feb 15, 2026
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] ☆97 · Updated Apr 9, 2025
- A benchmark for LLMs on complicated tasks in the terminal ☆1,614 · Updated Jan 22, 2026
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,339 · Updated this week
- Training and Benchmarking LLMs for Code Preference. ☆38 · Updated Nov 15, 2024
- Benchmark ClassEval for class-level code generation. ☆145 · Updated Oct 24, 2024
- A benchmark that challenges language models to code solutions for scientific problems ☆173 · Updated this week
- ☆56 · Updated May 28, 2024
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆62 · Updated Oct 21, 2024
- A series of technical reports on Slow Thinking with LLMs ☆760 · Updated Aug 13, 2025
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆471 · Updated Sep 30, 2024
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) ☆9,037 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,628 · Updated this week
- EvoEval: Evolving Coding Benchmarks via LLM ☆81 · Updated Apr 6, 2024
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,311 · Updated Feb 20, 2026