FSoft-AI4Code / CodeMMLU
[ICLR 2025] 🚀 CodeMMLU Evaluator: a framework for evaluating language models on the CodeMMLU multiple-choice question (MCQ) benchmark.
☆28 · Updated 8 months ago
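For context, MCQ-benchmark evaluation of the kind described above generally reduces to formatting each question with its lettered choices, querying the model, and scoring the returned letter against the gold label. The sketch below illustrates that loop; all names in it (`MCQ`, `format_prompt`, `evaluate`) are hypothetical and are not the CodeMMLU Evaluator's actual API.

```python
from dataclasses import dataclass

# Hypothetical types and functions for illustration only;
# not the CodeMMLU Evaluator's real interface.

@dataclass
class MCQ:
    question: str
    choices: list[str]   # e.g. ["A) O(n)", "B) O(log n)", ...]
    answer: str          # gold label, e.g. "B"

def format_prompt(item: MCQ) -> str:
    """Render one multiple-choice question as a plain-text prompt."""
    return "\n".join([item.question, *item.choices, "Answer with a single letter:"])

def evaluate(answer_fn, items: list[MCQ]) -> float:
    """Accuracy of `answer_fn(prompt) -> text` over a list of MCQs.

    `answer_fn` stands in for a call to the language model under test.
    """
    correct = sum(
        answer_fn(format_prompt(item)).strip().upper().startswith(item.answer)
        for item in items
    )
    return correct / len(items)

if __name__ == "__main__":
    demo = [MCQ("What is the time complexity of binary search?",
                ["A) O(n)", "B) O(log n)", "C) O(n log n)"], "B")]
    # Stub model that always answers "B"; replace with a real LM call.
    print(evaluate(lambda _prompt: "B", demo))  # 1.0
```

A real harness would additionally handle few-shot prompting, answer extraction from free-form generations, and per-subject accuracy breakdowns.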
Alternatives and similar repositories for CodeMMLU
Users interested in CodeMMLU are comparing it to the repositories listed below.
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆64 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆125 · Updated last year
- [EMNLP 2023] The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation ☆102 · Updated last year
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆90 · Updated 2 years ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- ☆107 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆48 · Updated last month
- ☆41 · Updated last year
- [FORGE 2025] A graph-based method for end-to-end code completion with repository-level context awareness ☆69 · Updated last year
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆73 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 7 months ago
- ☆41 · Updated 8 months ago
- ☆49 · Updated 8 months ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆30 · Updated last year
- ☆112 · Updated last year
- Code for the paper "Fishing for Magikarp" ☆176 · Updated 7 months ago
- ☆29 · Updated last week
- ☆40 · Updated 7 months ago
- Evaluating LLMs with fewer examples ☆170 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆61 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆87 · Updated last year
- ☆125 · Updated 9 months ago
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆80 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆118 · Updated last year
- ☆75 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆136 · Updated last year