SparksofAGI / MHPP
☆33 · Updated 4 months ago
Alternatives and similar repositories for MHPP
Users interested in MHPP are comparing it to the repositories listed below.
- ☆56 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) · ☆64 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ☆164 · Updated last year
- Training and Benchmarking LLMs for Code Preference · ☆37 · Updated last year
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback · ☆74 · Updated last year
- ☆32 · Updated this week
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts · ☆35 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following · ☆136 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning · ☆120 · Updated 8 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval · ☆87 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution · ☆104 · Updated 4 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents · ☆229 · Updated 6 months ago
- Official repository for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback, by Xingyao Wang*, Ziha… · ☆132 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) · ☆69 · Updated last year
- [NeurIPS'25] Official Implementation of RISE (Reinforcing Reasoning with Self-Verification) · ☆31 · Updated 5 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models · ☆63 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding · ☆128 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI · ☆107 · Updated 10 months ago
- ☆80 · Updated 10 months ago
- Reproducing R1 for Code with Reliable Rewards · ☆285 · Updated 8 months ago
- ☆56 · Updated last year
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* · ☆120 · Updated last year
- Collection of papers for scalable automated alignment · ☆93 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆170 · Updated 5 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs) · ☆59 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" · ☆78 · Updated last year
- e · ☆43 · Updated 9 months ago
- A Comprehensive Benchmark for Software Development · ☆127 · Updated last year
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings · ☆62 · Updated last year
- ☆51 · Updated last year