NaturalCodeBench (Findings of ACL 2024)
☆68 · Updated Oct 14, 2024
Alternatives and similar repositories for NaturalCodeBench
Users interested in NaturalCodeBench are comparing it to the repositories listed below.
- ☆83 · Updated Apr 18, 2024
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ☆168 · Updated Oct 11, 2024
- Official repository for the paper "COAST: Enhancing the Code Debugging Ability of LLMs through Communicative Agent Based Data Synthesis" · ☆18 · Updated Feb 19, 2025
- ☆56 · Updated May 28, 2024
- ☆307 · Updated Aug 18, 2025
- Extensive Self-Contrast Enables Feedback-Free Language Model Alignment · ☆21 · Updated Apr 2, 2024
- ☆13 · Updated Mar 5, 2025
- ☆10 · Updated Nov 14, 2024
- ☆22 · Updated Jul 16, 2024
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation · ☆49 · Updated Dec 22, 2023
- ☆46 · Updated Jun 11, 2025
- A collection of practical code generation tasks and tests from open-source projects, complementary to HumanEval by OpenAI · ☆24 · Updated Jan 28, 2023
- Reproducing R1 for Code with Reliable Rewards · ☆12 · Updated Apr 9, 2025
- [ACL 2025] Graph Aligned Large Language Models for Improved Source Code Understanding · ☆43 · Updated May 18, 2025
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆175 · Updated Aug 15, 2025
- ☆12 · Updated Mar 18, 2024
- This repository presents the original implementation of Pretraining Data Detection for Large Language Models: A Divergence-based Calibrat… · ☆22 · Updated May 21, 2025
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories · ☆68 · Updated Aug 15, 2024
- arXiv: https://arxiv.org/abs/2409.01944 · ☆22 · Updated Feb 20, 2025
- Code for our TKDE paper "Understanding WeChat User Preferences and “Wow” Diffusion" · ☆21 · Updated Aug 29, 2024
- ☆16 · Updated Nov 26, 2024
- A collection of papers tackling automatic fact-checking (particularly of AI-generated content) · ☆14 · Updated Nov 3, 2023
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval · ☆87 · Updated Sep 17, 2024
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… · ☆85 · Updated Nov 4, 2023
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL · ☆513 · Updated Jun 6, 2025
- ☆45 · Updated Dec 12, 2024
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" · ☆56 · Updated May 22, 2025
- ☆21 · Updated Jul 24, 2025
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ☆267 · Updated Oct 30, 2024
- Scaling Agentic Reinforcement Learning with a Multi-Turn, Multi-Task Framework · ☆249 · Updated Jan 17, 2026
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ☆192 · Updated Aug 16, 2024
- ☆50 · Updated Sep 6, 2023
- ☆159 · Updated Aug 27, 2024
- Collection of papers for scalable automated alignment · ☆93 · Updated Oct 22, 2024
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" · ☆11 · Updated Oct 11, 2024
- ☆10 · Updated Oct 28, 2019
- Reproducing R1 for Code with Reliable Rewards · ☆297 · Updated May 5, 2025
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI · ☆488 · Updated Jan 3, 2026
- ☆11 · Updated Jan 3, 2021