FSoft-AI4Code / CodeFlow
[FORGE 2025] Predicting Program Behavior with Dynamic Dependencies Learning
☆25 · Updated last year
Alternatives and similar repositories for CodeFlow
Users interested in CodeFlow are comparing it to the repositories listed below.
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆69 · Updated last year
- [NAACL 2025] Benchmark for Repository-Level Code Generation, focusing on Executability, Correctness from Test Cases, and Usage of Contexts fr… ☆38 · Updated 9 months ago
- [ACL 2024] Novel reranking method to select the best solutions for code generation ☆16 · Updated last year
- [ICLR 2025] 🚀 CodeMMLU Evaluator: A framework for evaluating language models on the CodeMMLU MCQ benchmark. ☆28 · Updated 8 months ago
- [EMNLP 2023] The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation ☆102 · Updated last year
- Open-source Self-Instruction Tuning Code LLM ☆171 · Updated 2 years ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆138 · Updated 8 months ago
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆80 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆125 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆323 · Updated 9 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆64 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 3 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- Work done by the Oxen.ai community attempting to reproduce the Self-Rewarding Language Model paper from Meta AI. ☆132 · Updated last year
- ☆120 · Updated last year
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆114 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆87 · Updated last year
- Code for the NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 5 months ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆136 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 7 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 10 months ago
- ☆41 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆242 · Updated 9 months ago
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆48 · Updated last month
- Language Model for Mainframe Modernization ☆63 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆182 · Updated last year
- Source code for the paper INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repairing ☆28 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark: https://arxiv.org/abs/2306.14898 ☆232 · Updated last year