vaibhavagg303 / DARS-Agent
☆66 · Updated 5 months ago
Alternatives and similar repositories for DARS-Agent
Users interested in DARS-Agent are comparing it to the libraries listed below
- Agent-computer interface for an AI software engineer. ☆111 · Updated last month
- Open Agent Computer Interface ☆86 · Updated 10 months ago
- Harness used to benchmark aider against the SWE-bench benchmark ☆75 · Updated last year
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆331 · Updated last week
- Contains the prompts we use to talk to various LLMs for different utilities inside the editor ☆83 · Updated last year
- Agentless Lite: RAG-based SWE-bench software engineering scaffold ☆44 · Updated 6 months ago
- Multi-Granularity LLM Debugger ☆91 · Updated 3 months ago
- A system that tries to resolve all issues on a GitHub repo with OpenHands. ☆114 · Updated 11 months ago
- ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive multi-agent systems. (EMNLP 2024 Demo) ☆88 · Updated last month
- ☆148 · Updated last month
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆429 · Updated last week
- [NAACL 2025] LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications ☆125 · Updated 3 months ago
- 🚀 The LLM Automatic Computer Framework: L2MAC ☆140 · Updated 9 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆216 · Updated 6 months ago
- ☆117 · Updated 4 months ago
- Coding problems used in aider's polyglot benchmark ☆183 · Updated 9 months ago
- ☆101 · Updated last year
- Run SWE-bench evaluations remotely ☆41 · Updated 2 months ago
- ☆170 · Updated 7 months ago
- This repository contains popular code generation frameworks such as MapCoder and CodeSIM. ☆61 · Updated 3 months ago
- Aider's refactoring benchmark exercises based on popular Python repos ☆77 · Updated last year
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆218 · Updated this week
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 · Updated 5 months ago
- ☆56 · Updated 3 months ago
- ☆160 · Updated last year
- ScreenSuite - The most comprehensive benchmarking suite for GUI Agents! ☆126 · Updated 2 weeks ago
- A Python library to orchestrate LLMs in a neural network-inspired structure ☆50 · Updated last year
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆127 · Updated last year
- ☆121 · Updated 5 months ago
- Code for the paper "Coding Agents with Multimodal Browsing are Generalist Problem Solvers" ☆86 · Updated last week