FloridSleeves / LLMDebugger
LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24)
☆576 · Updated last year
Alternatives and similar repositories for LLMDebugger
Users interested in LLMDebugger are comparing it to the repositories listed below.
- AgentCoder: multi-agent code generation framework. ☆376 · Updated 2 months ago
- ☆626 · Updated 5 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆817 · Updated last year
- Agentless🐱: an agentless approach to automatically solve software development problems ☆2,006 · Updated last year
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆675 · Updated 10 months ago
- End-to-end Generative Optimization for AI Agents ☆708 · Updated 2 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task ☆246 · Updated last week
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆627 · Updated 6 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆250 · Updated 10 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆551 · Updated this week
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆477 · Updated last month
- ☆671 · Updated last year
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving ☆185 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆186 · Updated last year
- ☆104 · Updated last year
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆430 · Updated last week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆796 · Updated 6 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆140 · Updated 9 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆323 · Updated 11 months ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆240 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆479 · Updated last year
- Run evaluation on LLMs using the HumanEval benchmark ☆427 · Updated 2 years ago
- ☆132 · Updated 8 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆602 · Updated 5 months ago
- ☆641 · Updated 3 months ago
- ☆159 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. ☆1,020 · Updated 6 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆171 · Updated 5 months ago
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,135 · Updated 3 months ago
- AWM: Agent Workflow Memory ☆395 · Updated last month