tongye98 / Awesome-Code-Benchmark
A comprehensive review of code-domain benchmarks for LLM research.
⭐51 · Updated 2 weeks ago
Alternatives and similar repositories for Awesome-Code-Benchmark
Users interested in Awesome-Code-Benchmark are comparing it to the repositories listed below.
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 · ⭐122 · Updated this week
- Awesome LLM Self-Consistency: a curated list of Self-consistency in Large Language Models · ⭐101 · Updated 11 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) · ⭐55 · Updated 3 weeks ago
- Reproducing R1 for Code with Reliable Rewards · ⭐232 · Updated 2 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". · ⭐78 · Updated last year
- A benchmark list for the evaluation of large language models. · ⭐130 · Updated 2 weeks ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization · ⭐38 · Updated 4 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? · ⭐144 · Updated 7 months ago
- ⭐97 · Updated 9 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ⭐148 · Updated 9 months ago
- ⭐234 · Updated 10 months ago
- Critique-out-Loud Reward Models · ⭐67 · Updated 8 months ago
- A Comprehensive Benchmark for Software Development. · ⭐111 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 · ⭐168 · Updated 10 months ago
- Data and Code for Program of Thoughts [TMLR 2023] · ⭐279 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ⭐145 · Updated 11 months ago
- This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback. · ⭐533 · Updated 8 months ago
- ⭐122 · Updated last month
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following · ⭐127 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" · ⭐69 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. · ⭐55 · Updated 8 months ago
- EvoEval: Evolving Coding Benchmarks via LLM · ⭐74 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… · ⭐126 · Updated last year
- Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… · ⭐221 · Updated 2 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents · ⭐112 · Updated last week
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… · ⭐132 · Updated last year
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… · ⭐45 · Updated 2 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories · ⭐61 · Updated 10 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". · ⭐80 · Updated 6 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning · ⭐226 · Updated 2 months ago