Intelligent-CAT-Lab / CodeMind
CodeMind is a generic framework for evaluating the inductive code reasoning of LLMs. It is equipped with a static analysis component that enables in-depth analysis of the results.
☆34 · Updated 6 months ago
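For context, inductive code reasoning in this setting typically means predicting what a given program does when executed, rather than generating code. Below is a minimal sketch of that style of check, assuming a hypothetical `query_llm` callable standing in for any model client; it is an illustration of the evaluation idea, not CodeMind's actual harness or API.

```python
import subprocess
import sys

def run_snippet(code: str, timeout: float = 5.0) -> str:
    """Execute a self-contained Python snippet and capture its stdout (ground truth)."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()

def execution_reasoning_accuracy(snippets, query_llm) -> float:
    """Fraction of snippets whose printed output the model predicts exactly.

    `query_llm` is a hypothetical stand-in for an LLM call; it is not
    part of CodeMind's API.
    """
    correct = 0
    for code in snippets:
        prompt = (
            "What does this program print? "
            "Reply with the exact output only.\n\n" + code
        )
        if query_llm(prompt).strip() == run_snippet(code):
            correct += 1
    return correct / len(snippets)
```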
Alternatives and similar repositories for CodeMind:
Users interested in CodeMind are comparing it to the libraries listed below.
- ☆63 · Updated last month
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW Oral ☆57 · Updated 4 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆102 · Updated 3 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆57 · Updated 5 months ago
- ☆74 · Updated last year
- EvoEval: Evolving Coding Benchmarks via LLM ☆66 · Updated 10 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆41 · Updated 6 months ago
- ☆60 · Updated 9 months ago
- r2e: turn any GitHub repository into a programming agent environment ☆100 · Updated 2 weeks ago
- Incremental Python parser for constrained generation of code by LLMs ☆15 · Updated 5 months ago
- ☆28 · Updated 3 months ago
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-to-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- ☆121 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆41 · Updated last month
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆125 · Updated 4 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated 9 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆92 · Updated 3 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆34 · Updated this week
- Small, simple agent task environments for training and evaluation ☆18 · Updated 3 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆47 · Updated 3 months ago
- [EMNLP'23] Execution-Based Evaluation for Open-Domain Code Generation ☆46 · Updated last year
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆72 · Updated 8 months ago
- ☆24 · Updated last month
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 9 months ago
- ☆29 · Updated last year
- ☆33 · Updated last year
- ☆22 · Updated 3 months ago
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆83 · Updated last year