Aider-AI / refactor-benchmark
Aider's refactoring benchmark exercises, based on popular Python repos
☆78 · Updated last year
Alternatives and similar repositories for refactor-benchmark
Users interested in refactor-benchmark are comparing it to the repositories listed below.
- Harness used to benchmark aider against the SWE-bench benchmark ☆78 · Updated last year
- Coding problems used in aider's polyglot benchmark ☆194 · Updated 11 months ago
- Proof-of-concept of Cursor's Instant Apply feature ☆87 · Updated last year
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 3 months ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆93 · Updated 2 months ago
- Simple Graph Memory for AI applications ☆89 · Updated 6 months ago
- ☆159 · Updated last year
- Agent-computer interface for an AI software engineer. ☆114 · Updated 2 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 10 months ago
- Leveraging DSPy for AI-driven task understanding and solution generation, the Self-Discover Framework automates problem-solving through r… ☆72 · Updated last month
- ☆24 · Updated 10 months ago
- Simple examples using Argilla tools to build AI ☆56 · Updated last year
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆382 · Updated last week
- ⚡️🧪 Fast LLM Tool Calling Experimentation, big and smol ☆152 · Updated last year
- ☆164 · Updated 4 months ago
- ☆90 · Updated 10 months ago
- ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive multi-agent systems. (EMNLP 2024 Demo) ☆89 · Updated last week
- Function Calling Benchmark & Testing ☆92 · Updated last year
- Run embedding models using ONNX ☆35 · Updated last year
- ☆59 · Updated 10 months ago
- A framework for evaluating function calls made by LLMs ☆39 · Updated last year
- ☆126 · Updated 6 months ago
- Convert a web page to markdown ☆80 · Updated last year
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆107 · Updated 4 months ago
- Writing Blog Posts with Generative Feedback Loops! ☆50 · Updated last year
- A Python library to orchestrate LLMs in a neural network-inspired structure ☆51 · Updated last year
- Agent that fixes SWE-bench issues ☆19 · Updated last year
- Contains the prompts we use to talk to various LLMs for different utilities inside the editor ☆83 · Updated last year
- Track the progress of LLM context utilisation ☆55 · Updated 7 months ago