MCEVAL / McEval
☆44 · Updated 10 months ago
Alternatives and similar repositories for McEval
Users interested in McEval are comparing it to the repositories listed below; most are code-evaluation benchmarks, and a short pass@k sketch follows the list.
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆157 · Updated last month
- ☆32 · Updated 3 weeks ago
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated 11 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated last year
- Heuristic filtering framework for RefineCode ☆76 · Updated 7 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆153 · Updated last year
- ☆53 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆82 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (ACL 2024 SRW, Oral) ☆62 · Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆257 · Updated 5 months ago
- LeetCode Training and Evaluation Dataset ☆35 · Updated 5 months ago
- Collection of papers on scalable automated alignment ☆93 · Updated 11 months ago
- Generates WizardCoder-style instruction data from the CodeAlpaca dataset ☆21 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆38 · Updated 7 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆56 · Updated 11 months ago
- ☆11 · Updated 2 years ago
- A Comprehensive Benchmark for Software Development ☆114 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆114 · Updated 4 months ago
- Towards Systematic Measurement for Long Text Quality ☆36 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆256 · Updated 11 months ago
- Repository of the LV-Eval benchmark ☆70 · Updated last year
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆184 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated 5 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆71 · Updated last year
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆106 · Updated 4 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆119 · Updated last year
- Async pipelined version of Verl ☆117 · Updated 6 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆178 · Updated 3 months ago
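
Most of the code benchmarks above (McEval, CRUXEval, HumanEval-XL, DS-1000, xCodeEval) report results as pass@k over sampled completions. For orientation, here is a minimal sketch of the standard unbiased pass@k estimator from Chen et al. (2021); the function name and the `n`/`c`/`k` parameters are illustrative and not tied to any one repository's API.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total completions sampled for one problem
    c: completions that passed all unit tests
    k: the k in pass@k (requires k <= n)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 samples per problem, 3 passing -> pass@1 = 0.3
print(pass_at_k(n=10, c=3, k=1))
```

A benchmark score is then the mean of this estimate over all problems; the harnesses listed above differ mainly in how they sandbox the unit-test execution that produces `c`.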