MCEVAL / McEval
☆43 · Updated 7 months ago
Alternatives and similar repositories for McEval
Users who are interested in McEval are comparing it to the libraries listed below.
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated 9 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆84 · Updated 9 months ago
- Heuristic filtering framework for RefineCode ☆66 · Updated 4 months ago
- Reproducing R1 for Code with Reliable Rewards ☆232 · Updated 2 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆148 · Updated 9 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆145 · Updated 11 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- Data processing for code LLM pretraining, fine-tuning, and DPO; state-of-the-art industry processing pipeline ☆42 · Updated 11 months ago
- Repository of LV-Eval Benchmark ☆67 · Updated 10 months ago
- Counting-Stars (★) ☆83 · Updated last month
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆67 · Updated 10 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆78 · Updated last year
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆184 · Updated 9 months ago
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆103 · Updated last month
- [LREC-COLING 2024] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆38 · Updated 4 months ago
- Collection of papers for scalable automated alignment ☆92 · Updated 8 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models ☆244 · Updated 8 months ago
- LeetCode Training and Evaluation Dataset ☆25 · Updated 2 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆61 · Updated 9 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆168 · Updated 10 months ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs ☆42 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆140 · Updated 2 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆63 · Updated 8 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆69 · Updated 3 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆263 · Updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆68 · Updated 2 months ago
- Code for our EMNLP 2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 · Updated last year