FloridSleeves / LLMDebugger
LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step
☆533 · Updated 8 months ago
Alternatives and similar repositories for LLMDebugger
Users interested in LLMDebugger are comparing it to the libraries listed below.
- This repo is the official implementation of AgentCoder and AgentCoder+. ☆327 · Updated 3 months ago
- ☆419 · Updated this week
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving ☆147 · Updated 3 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆753 · Updated 10 months ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,699 · Updated 5 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆530 · Updated 2 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆178 · Updated this week
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆464 · Updated 3 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆179 · Updated 2 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆477 · Updated 3 weeks ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆307 · Updated 3 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆448 · Updated last week
- ☆597 · Updated 4 months ago
- AIDE: AI-Driven Exploration in the Space of Code. State-of-the-art machine learning engineering agent that automates AI R&D. ☆917 · Updated last month
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆372 · Updated last month
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆136 · Updated 6 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆164 · Updated 9 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆519 · Updated this week
- ☆157 · Updated 9 months ago
- Run evaluation on LLMs using the HumanEval benchmark ☆413 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆728 · Updated 2 weeks ago
- End-to-end Generative Optimization for AI Agents ☆586 · Updated this week
- A multi-programming-language benchmark for LLMs ☆249 · Updated 4 months ago
- Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding ☆386 · Updated last year
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆635 · Updated 2 months ago
- AWM: Agent Workflow Memory ☆271 · Updated 4 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆204 · Updated last week
- Beating the GAIA benchmark with Transformers Agents 🚀 ☆120 · Updated 3 months ago
- Repo-level code generation papers ☆178 · Updated 2 months ago
- Code for Quiet-STaR ☆732 · Updated 9 months ago