infi-coder / infibench-evaluator
The evaluation framework for the InfiCoder-Eval benchmark.
☆21 · Updated last year
Alternatives and similar repositories for infibench-evaluator
Users interested in infibench-evaluator are comparing it to the libraries listed below.
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆64 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆165 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆128 · Updated last year
- ☆131 · Updated 9 months ago
- ☆33 · Updated last week
- ☆56 · Updated last year
- ☆33 · Updated 4 months ago
- ☆28 · Updated 3 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆74 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆140 · Updated 9 months ago
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- ☆44 · Updated 9 months ago
- ☆90 · Updated 3 months ago
- [ACL'25 Findings] Official repo for "HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation Task" ☆37 · Updated 10 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆236 · Updated 6 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆81 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆104 · Updated 4 months ago
- ☆123 · Updated 11 months ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 10 months ago
- BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions ☆25 · Updated last year
- ☆112 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Replicating O1 inference-time scaling laws ☆93 · Updated last year
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆58 · Updated 6 months ago
- ☆132 · Updated 8 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated 11 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆111 · Updated 11 months ago