evalplus / repoqa
RepoQA: Evaluating Long-Context Code Understanding
☆109 · Updated 7 months ago
Alternatives and similar repositories for repoqa
Users interested in repoqa are comparing it to the libraries listed below.
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆145 · Updated 8 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆73 · Updated last year
- Training and Benchmarking LLMs for Code Preference ☆33 · Updated 7 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at the ACL 2024 SRW ☆63 · Updated 8 months ago
- R2E: Turn any GitHub repository into a programming agent environment ☆125 · Updated 2 months ago
- 🚀 SWE-bench Goes Live! ☆80 · Updated this week
- ☆36 · Updated last month
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆76 · Updated 2 weeks ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- Replicating O1 inference-time scaling laws ☆87 · Updated 6 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆83 · Updated 9 months ago
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline ☆40 · Updated last week
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆33 · Updated 11 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆177 · Updated this week
- ☆47 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆42 · Updated 10 months ago
- ☆97 · Updated 11 months ago
- Scaling Data for SWE-agents ☆256 · Updated this week
- ☆86 · Updated 2 weeks ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆65 · Updated 9 months ago
- Async pipelined version of Verl ☆100 · Updated 2 months ago
- ☆64 · Updated last year
- ☆75 · Updated 3 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆48 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆167 · Updated 10 months ago
- ☆97 · Updated last month
- SWE Arena ☆34 · Updated 2 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task ☆183 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file ☆173 · Updated 3 months ago
- A Comprehensive Benchmark for Software Development ☆108 · Updated last year