microsoft / coderec_programming_states
Code and Data for: Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming
☆33 · Updated last year
Alternatives and similar repositories for coderec_programming_states
Users interested in coderec_programming_states are comparing it to the libraries listed below.
- ☆127 · Updated 2 years ago
- Releasing code for "ReCode: Robustness Evaluation of Code Generation Models" ☆55 · Updated last year
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆64 · Updated this week
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 3 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆48 · Updated last month
- ☆78 · Updated last year
- ☆112 · Updated last year
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD) ☆25 · Updated 3 months ago
- Training language models to make programs faster ☆97 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆182 · Updated last year
- CodeBERTScore: an automatic metric for code generation, based on BERTScore ☆206 · Updated last year
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆80 · Updated last year
- Code for "The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction an… ☆11 · Updated last year
- CodeMind is a generic framework for evaluating inductive code reasoning of LLMs. It is equipped with a static analysis component that ena… ☆42 · Updated last month
- This is the artifact for the paper "Are Machine Learning Cloud APIs Used Correctly? (#421)" in ICSE 2021 ☆16 · Updated 4 years ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆138 · Updated 8 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆164 · Updated 4 months ago
- Data and Code for Reproducing "Global Relational Models of Source Code" ☆84 · Updated 4 years ago
- Dataset with coverage annotations for the HumanEval dataset ☆24 · Updated 2 years ago
- ☆45 · Updated 5 months ago
- Incremental Python parser for constrained generation of code by LLMs ☆18 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- Code for "StructCoder: Structure-Aware Transformer for Code Generation" ☆77 · Updated last year
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆79 · Updated last year
- ☆20 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023 ☆251 · Updated 2 years ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task ☆228 · Updated this week