microsoft / coverage-eval
Dataset with coverage annotations for the HumanEval benchmark
☆24 · Updated 2 years ago
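As a rough illustration of how such coverage-annotated HumanEval records might be consumed, here is a minimal loading sketch. The file name and the field names (`task_id`, `coverage`) are assumptions for the example only, not the repository's documented schema.

```python
import json

# Hypothetical loading sketch for a coverage-annotated HumanEval dump.
# The file name and field names below are assumptions for illustration;
# check the coverage-eval repository for its actual files and schema.
def load_coverage_records(path="coverage_annotations.jsonl"):
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                records.append(json.loads(line))
    return records

if __name__ == "__main__":
    for record in load_coverage_records()[:3]:
        # Each record is assumed to pair a HumanEval task with its coverage data.
        print(record.get("task_id"), record.get("coverage"))
```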
Alternatives and similar repositories for coverage-eval
Users interested in coverage-eval are comparing it to the libraries listed below.
- ☆20 · Updated 2 years ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test-generation ☆66 · Updated 3 weeks ago
- Releasing code for "ReCode: Robustness Evaluation of Code Generation Models" ☆57 · Updated last year
- ☆127 · Updated 2 years ago
- Source Code Data Augmentation for Deep Learning: A Survey. ☆66 · Updated last year
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆65 · Updated 3 years ago
- We introduce FixEval, a dataset for competitive programming bug fixing along with a comprehensive test suite, and show the necessity of e… ☆24 · Updated 3 years ago
- [EMNLP'22] Code for 'Exploring Representation-level Augmentation for Code Search' ☆27 · Updated 2 years ago
- A collection of recent papers, benchmarks, and datasets from the AI4Code domain. ☆58 · Updated last year
- Code and data for "Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming" ☆33 · Updated last year
- ☆44 · Updated 6 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- ☆33 · Updated 11 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆74 · Updated last year
- Implementation of "CoTexT: Multi-task Learning with Code-Text Transformer" ☆36 · Updated 4 years ago
- Code for "Implant Global and Local Hierarchy Information to Sequence based Code Representation Models" ☆12 · Updated last year
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year
- ClassEval: a benchmark for class-level code generation. ☆145 · Updated last year
- ☆68 · Updated last year
- Reinforcement Learning for Repository-Level Code Completion ☆42 · Updated last year
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆80 · Updated last year
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models" ☆34 · Updated 2 years ago
- RepoQA: Evaluating Long-Context Code Understanding ☆128 · Updated last year
- Replication package for the ISSTA 2023 paper "Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond" ☆23 · Updated 2 years ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆67 · Updated last year
- ESEC/FSE'21: Prediction-Preserving Program Simplification ☆10 · Updated 3 years ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- ☆47 · Updated 3 years ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆79 · Updated last year
- JEMMA: An Extensible Java Dataset for Many ML4Code Applications ☆19 · Updated 3 years ago