nimasteryang / LingxiLinks
☆21 · Updated 3 weeks ago
Alternatives and similar repositories for Lingxi
Users interested in Lingxi are comparing it to the repositories listed below.
- OrcaLoca: An LLM Agent Framework for Software Issue Localization [ICML 25] ☆19 · Updated last month
- Agentless Lite: RAG-based SWE-Bench software engineering scaffold ☆29 · Updated last month
- Harness used to benchmark aider against SWE Bench benchmarks ☆72 · Updated 11 months ago
- ☆63 · Updated 2 weeks ago
- ☆92 · Updated 3 weeks ago
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆85 · Updated 2 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆179 · Updated 2 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆164 · Updated 9 months ago
- ☆94 · Updated 10 months ago
- ☆157 · Updated 9 months ago
- Source code for the paper "INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing" ☆26 · Updated 6 months ago
- Advancing LLM with Diverse Coding Capabilities ☆72 · Updated 10 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task ☆178 · Updated last week
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆181 · Updated this week
- Scaling Data for SWE-agents ☆220 · Updated this week
- ☆83 · Updated last month
- Enhanced fork of SWE-bench, tailored for OpenDevin's ecosystem ☆25 · Updated last year
- Reasoning by Communicating with Agents ☆28 · Updated last month
- A system that tries to resolve all issues on a GitHub repo with OpenHands ☆109 · Updated 6 months ago
- ☆41 · Updated 5 months ago
- Run SWE-bench evaluations remotely ☆17 · Updated 2 weeks ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆214 · Updated this week
- From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging ☆71 · Updated last week
- Beating the GAIA benchmark with Transformers Agents 🚀 ☆121 · Updated 3 months ago
- Agent computer interface for AI software engineer ☆80 · Updated this week
- The official repo for the code and data of the paper "SMART" ☆26 · Updated 3 months ago
- The evaluation benchmark on MCP servers ☆115 · Updated 2 weeks ago
- RepoQA: Evaluating Long-Context Code Understanding ☆108 · Updated 7 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆65 · Updated 7 months ago
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" ☆21 · Updated 2 months ago