Aider-AI / refactor-benchmark
Aider's refactoring benchmark: exercises based on popular Python repos
☆78 · Updated last year
Alternatives and similar repositories for refactor-benchmark
Users interested in refactor-benchmark are comparing it to the repositories listed below
- Harness used to benchmark aider against SWE Bench benchmarks ☆79 · Updated last year
- Coding problems used in aider's polyglot benchmark ☆199 · Updated last year
- Proof-of-concept of Cursor's Instant Apply feature ☆88 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆99 · Updated 4 months ago
- A system that tries to resolve all issues on a GitHub repo with OpenHands. ☆117 · Updated last year
- Simple Graph Memory for AI applications ☆90 · Updated 8 months ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆90 · Updated 4 months ago
- ☆159 · Updated last year
- Convert a web page to Markdown ☆80 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆92 · Updated last year
- Agent-computer interface for an AI software engineer. ☆115 · Updated last month
- ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive multi-agent systems. (EMNLP 2024 Demo) ☆90 · Updated last month
- DSPy program/pipeline inspector widget for Jupyter/VSCode Notebooks. ☆44 · Updated last year
- ☆166 · Updated 5 months ago
- Contains the prompts we use to talk to various LLMs for different utilities inside the editor ☆84 · Updated 2 years ago
- A framework for evaluating function calls made by LLMs ☆40 · Updated last year
- Leveraging DSPy for AI-driven task understanding and solution generation, the Self-Discover Framework automates problem-solving through r… ☆73 · Updated 3 months ago
- Anthropic Computer Use with Modal Sandboxes ☆43 · Updated last year
- ☆74 · Updated 2 years ago
- A Ruby on Rails style framework for the DSPy (Demonstrate, Search, Predict) project for Language Models like GPT, BERT, and LLaMA. ☆132 · Updated last year
- Simple examples using Argilla tools to build AI ☆57 · Updated last year
- Function Calling Benchmark & Testing ☆92 · Updated last year
- A Python library to orchestrate LLMs in a neural network-inspired structure ☆52 · Updated last year
- Replace expensive LLM calls with finetunes automatically ☆66 · Updated last year
- ☆24 · Updated last year
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year
- ☆50 · Updated last year
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆185 · Updated 8 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆424 · Updated last week
- Verbosity control for AI agents ☆66 · Updated last year