MCEVAL / McEval
☆46 · Updated last year
Alternatives and similar repositories for McEval
Users interested in McEval are comparing it to the repositories listed below.
- NaturalCodeBench (Findings of ACL 2024) ☆69 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆87 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆166 · Updated 4 months ago
- ☆33 · Updated 3 months ago
- ☆56 · Updated last year
- Heuristic filtering framework for RefineCode ☆82 · Updated 9 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- Collection of papers for scalable automated alignment. ☆94 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆118 · Updated 6 months ago
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆46 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW Oral ☆64 · Updated last year
- Repository of LV-Eval Benchmark ☆73 · Updated last year
- LeetCode Training and Evaluation Dataset ☆45 · Updated 8 months ago
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆108 · Updated 7 months ago
- Reproducing R1 for Code with Reliable Rewards ☆278 · Updated 8 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆138 · Updated 8 months ago
- ☆109 · Updated 5 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆193 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆184 · Updated 6 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆74 · Updated last year
- Fantastic Data Engineering for Large Language Models ☆93 · Updated last year
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆46 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆253 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆85 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- A Comprehensive Benchmark for Software Development. ☆124 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆45 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆181 · Updated 10 months ago
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- Code for the curation of The Stack v2 and StarCoder2 training data ☆124 · Updated last year