MCEVAL / McEval
☆24, updated last month
Related projects
Alternatives and complementary repositories for McEval
- NaturalCodeBench (Findings of ACL 2024) · ☆56, updated last month
- Repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" · ☆57, updated 4 months ago
- An evolving code generation benchmark aligned with real-world code repositories · ☆46, updated 2 months ago
- ☆50, updated 4 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" · ☆109, updated 5 months ago
- A comprehensive benchmark for software development · ☆84, updated 5 months ago
- Generates WizardCoder-style instruction data from CodeAlpaca · ☆20, updated last year
- ☆118, updated 6 months ago
- ☆51, updated 3 months ago
- [LREC-COLING 2024] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization · ☆28, updated last month
- Do Large Language Models Know What They Don't Know? · ☆85, updated this week
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues · ☆46, updated 3 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆121, updated 3 months ago
- MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models · ☆16, updated 3 weeks ago
- ScaleQuest: a scalable and cost-effective data synthesis method for unleashing the reasoning capability of LLMs · ☆47, updated 2 weeks ago
- ☆71, updated 10 months ago
- Official repository for AutoDetect, an automated weakness detection framework for LLMs · ☆38, updated 4 months ago
- Clustering and Ranking: Diversity-Preserved Instruction Selection through Expert-Aligned Quality Estimation · ☆65, updated last month
- 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training · ☆87, updated last month
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval · ☆74, updated last month
- ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios · ☆62, updated 7 months ago
- ☆88, updated last month
- Code for the EMNLP 2023 paper "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" · ☆24, updated 11 months ago
- Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models" (ACL 2024) · ☆87, updated 2 weeks ago
- A collection of practical code generation tasks and tests from open-source projects, complementary to OpenAI's HumanEval · ☆117, updated 11 months ago
- Paper list and datasets for "A Survey on Data Selection for LLM Instruction Tuning" · ☆32, updated 9 months ago
- ☆78, updated 6 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models · ☆141, updated 4 months ago
- Trending projects and awesome papers on data-centric LLM studies · ☆32, updated last week
- Code for "Scaling Laws of RoPE-based Extrapolation" · ☆70, updated last year