SparksofAGI / MHPP
☆32 · Updated 2 months ago
Alternatives and similar repositories for MHPP
Users interested in MHPP are comparing it to the repositories listed below
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆152 · Updated 10 months ago
- ☆52 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW Oral ☆62 · Updated 10 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆34 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆35 · Updated 9 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆68 · Updated 11 months ago
- ☆28 · Updated last week
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated 11 months ago
- Reproducing R1 for Code with Reliable Rewards ☆251 · Updated 3 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆108 · Updated 3 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆114 · Updated 9 months ago
- ☆71 · Updated this week
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] ☆86 · Updated 4 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 5 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆56 · Updated 10 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 6 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆164 · Updated last month
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆52 · Updated 6 months ago
- [NeurIPS 2024] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆112 · Updated 8 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 9 months ago
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆26 · Updated 2 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated 10 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆80 · Updated last year
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆145 · Updated last month
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆60 · Updated last year
- 🚀 SWE-bench Goes Live! ☆112 · Updated 3 weeks ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆185 · Updated 10 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆153 · Updated last week
- Async pipelined version of Verl ☆113 · Updated 4 months ago
- ☆45 · Updated this week