symflower / eval-dev-quality
DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs.
☆179 · Updated 2 months ago
Alternatives and similar repositories for eval-dev-quality
Users interested in eval-dev-quality are comparing it to the libraries listed below.
- Simple examples using Argilla tools to build AI ☆53 · Updated 8 months ago
- Tutorial for building LLM router ☆221 · Updated last year
- 🤖 Headless IDE for AI agents ☆196 · Updated 3 months ago
- A system that tries to resolve all issues on a GitHub repo with OpenHands. ☆110 · Updated 8 months ago
- Function Calling Benchmark & Testing ☆88 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 6 months ago
- ☆115 · Updated 7 months ago
- Routing on Random Forest (RoRF) ☆187 · Updated 10 months ago
- Contains the prompts we use to talk to various LLMs for different utilities inside the editor ☆80 · Updated last year
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆53 · Updated 2 months ago
- ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive multi-agent systems. (EMNLP 2024 Demo) ☆83 · Updated 4 months ago
- ☆166 · Updated 5 months ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆87 · Updated last month
- Harness used to benchmark aider against SWE Bench benchmarks ☆72 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆87 · Updated 10 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆273 · Updated last week
- ☆157 · Updated last year
- A simple Python sandbox for helpful LLM data agents ☆277 · Updated last year
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆111 · Updated 3 months ago
- ☆102 · Updated last month
- Beating the GAIA benchmark with Transformers Agents. 🚀 ☆131 · Updated 5 months ago
- An automated tool for discovering insights from research paper corpora ☆138 · Updated last year
- LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores a… ☆63 · Updated last week
- Leveraging DSPy for AI-driven task understanding and solution generation, the Self-Discover Framework automates problem-solving through r… ☆67 · Updated last year
- ☆73 · Updated 5 months ago
- CursorCore: Assist Programming through Aligning Anything ☆132 · Updated 5 months ago
- ☆222 · Updated last month
- ☆96 · Updated 10 months ago
- ☆66 · Updated last year