evalplus / repoqa
RepoQA: Evaluating Long-Context Code Understanding
☆117 · Updated 11 months ago
Alternatives and similar repositories for repoqa
Users interested in repoqa are comparing it to the libraries listed below:
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆153 · Updated 11 months ago
- Training and Benchmarking LLMs for Code Preference ☆36 · Updated 10 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆62 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- ☆38 · Updated 5 months ago
- ☆28 · Updated this week
- ☆53 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated 10 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆131 · Updated 5 months ago
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆88 · Updated last week
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆70 · Updated last year
- [NeurIPS '24] SelfCodeAlign: Self-Alignment for Code Generation ☆316 · Updated 7 months ago
- SWE Arena ☆34 · Updated 2 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆166 · Updated 2 months ago
- [ACL '25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline ☆55 · Updated 2 months ago
- ☆32 · Updated 3 weeks ago
- ☆115 · Updated 4 months ago
- ☆118 · Updated 4 months ago
- Commit0: Library Generation from Scratch ☆168 · Updated 4 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models ☆93 · Updated 4 months ago
- ☆78 · Updated 6 months ago
- ☆71 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆157 · Updated last month
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆171 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated 11 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆76 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆71 · Updated last year
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year