evalplus / repoqa
RepoQA: Evaluating Long-Context Code Understanding
☆107 · Updated 5 months ago
Alternatives and similar repositories for repoqa:
Users interested in repoqa are comparing it to the libraries listed below.
- Training and Benchmarking LLMs for Code Preference. ☆33 · Updated 5 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆136 · Updated 6 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆68 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆59 · Updated 6 months ago
- ☆85 · Updated 2 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆61 · Updated 2 weeks ago
- Replicating O1 inference-time scaling laws ☆83 · Updated 4 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆30 · Updated 9 months ago
- ☆75 · Updated last month
- SWE Arena ☆33 · Updated last week
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated last year
- ☆31 · Updated last week
- ☆28 · Updated 5 months ago
- r2e: turn any github repository into a programming agent environment ☆114 · Updated this week
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆47 · Updated last year
- ☆24 · Updated 5 months ago
- Code for Paper: Teaching Language Models to Critique via Reinforcement Learning ☆94 · Updated last week
- ☆60 · Updated 11 months ago
- ☆44 · Updated 10 months ago
- Code for paper "LEVER: Learning to Verifiy Language-to-Code Generation with Execution" (ICML'23)☆86Updated last year
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆64 · Updated 7 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆170 · Updated last month
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated 10 months ago
- ☆60 · Updated 11 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆167 · Updated 3 weeks ago
- ☆91 · Updated 9 months ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆11 · Updated 2 weeks ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆79 · Updated 7 months ago
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆49 · Updated 5 months ago
- ☆36 · Updated 10 months ago