SparksofAGI / MHPP
☆32, updated last month
Alternatives and similar repositories for MHPP
Users interested in MHPP are comparing it to the repositories listed below.
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) (☆62, updated last year)
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation (☆154, updated last year)
- ☆53, updated last year
- ☆28, updated last week
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution (☆89, updated last month)
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts (☆35, updated last year)
- RepoQA: Evaluating Long-Context Code Understanding (☆119, updated 11 months ago)
- Training and Benchmarking LLMs for Code Preference (☆36, updated 11 months ago)
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following (☆132, updated last year)
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning (☆114, updated 5 months ago)
- Reproducing R1 for Code with Reliable Rewards (☆259, updated 5 months ago)
- NaturalCodeBench (Findings of ACL 2024) (☆67, updated last year)
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval (☆86, updated last year)
- ☆73, updated 7 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models (☆62, updated last year)
- Official repository for "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" (☆50, updated 11 months ago)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback (☆71, updated last year)
- [TMLR 2025] Code and data for "Long-context LLMs Struggle with Long In-context Learning" (☆108, updated 8 months ago)
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI (☆105, updated 7 months ago)
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings (☆55, updated 8 months ago)
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) (☆159, updated 2 months ago)
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection (☆52, updated 11 months ago)
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents (☆176, updated 3 months ago)
- [NeurIPS 2024] Official code for "DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" (☆115, updated 10 months ago)
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models (☆185, updated last year)
- ☆44, updated 10 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" (☆174, updated 5 months ago)
- ☆47, updated 2 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages (☆56, updated last year)
- [NeurIPS 2025] Official implementation of RISE (Reinforcing Reasoning with Self-Verification) (☆30, updated 2 months ago)