FSoft-AI4Code / CodeMMLU
[ICLR 2025] CodeMMLU Evaluator: a framework for evaluating language models on the CodeMMLU multiple-choice question benchmark.
★23 · Updated 2 months ago
Alternatives and similar repositories for CodeMMLU
Users interested in CodeMMLU are comparing it to the repositories listed below.
- [EMNLP 2023] The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation ★96 · Updated 10 months ago
- ★36 · Updated last month
- ★86 · Updated 7 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ★63 · Updated 9 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ★61 · Updated 8 months ago
- ★119 · Updated last month
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ★45 · Updated 5 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ★77 · Updated 2 weeks ago
- Training and Benchmarking LLMs for Code Preference ★33 · Updated 7 months ago
- SWE-bench Goes Live! ★80 · Updated this week
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ★95 · Updated 3 weeks ago
- RepoQA: Evaluating Long-Context Code Understanding ★109 · Updated 7 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ★48 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ★55 · Updated last month
- Systematic evaluation framework that automatically rates overthinking behavior in large language models ★90 · Updated last month
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ★58 · Updated last year
- Evaluating LLMs with fewer examples ★158 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ★106 · Updated 8 months ago
- ★28 · Updated last week
- ★41 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ★89 · Updated 7 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ★37 · Updated 10 months ago
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" ★38 · Updated last year
- ★110 · Updated 11 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ★109 · Updated last year
- ★31 · Updated this week
- [ACL'2025 Findings] Official repo for "HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation Task…" ★28 · Updated 2 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ★85 · Updated last year
- ★183 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ★55 · Updated 8 months ago