Md-Ashraful-Pramanik / MapCoder
MapCoder: Multi-Agent Code Generation for Competitive Problem Solving
☆111 · Updated last week
Alternatives and similar repositories for MapCoder:
Users who are interested in MapCoder are comparing it to the repositories listed below.
- ☆153 · Updated 5 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? · ☆109 · Updated 3 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph · ☆133 · Updated last month
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆130 · Updated 6 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ☆144 · Updated 6 months ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization · ☆128 · Updated 9 months ago
- [NeurIPS 2024] Agent Planning with World Knowledge Model · ☆110 · Updated 2 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning" (FSE 2024) · ☆62 · Updated 5 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] · ☆281 · Updated 9 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ☆125 · Updated 4 months ago
- A Comprehensive Benchmark for Software Development · ☆93 · Updated 8 months ago
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning · ☆207 · Updated last month
- Official implementation of AgentCoder and AgentCoder+ · ☆287 · Updated this week
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness · ☆57 · Updated 5 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆103Updated last week
- ☆210Updated 6 months ago
- The repository for paper "DebugBench: "Evaluating Debugging Capability of Large Language Models".☆62Updated 7 months ago
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) · ☆172 · Updated 5 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ☆233 · Updated 3 months ago
- ☆73 · Updated this week
- Code for the paper "Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System" · ☆52 · Updated 3 months ago
- Official implementation of the paper "How to Understand Whole Repository?" (new SOTA on SWE-bench Lite, 21.3%) · ☆71 · Updated 3 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use · ☆78 · Updated 11 months ago
- A benchmark list for the evaluation of large language models · ☆80 · Updated 7 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement · ☆174 · Updated 10 months ago
- A collection of practical code generation tasks and tests in open-source projects. Complementary to HumanEval by OpenAI · ☆131 · Updated last month
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages · ☆47 · Updated 4 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval · ☆77 · Updated 5 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback · ☆62 · Updated 5 months ago
- RepoQA: Evaluating Long-Context Code Understanding · ☆102 · Updated 3 months ago