jszheng21 / RACE
RACE is a multi-dimensional benchmark for code generation that focuses on Readability, mAintainability, Correctness, and Efficiency.
☆11 · Updated last year
Alternatives and similar repositories for RACE
Users interested in RACE are comparing it to the libraries listed below.
- [COLING25] CodeJudge-Eval: Can Large Language Models be Good Judges in Code Understanding? ☆13 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆53 · Updated 7 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆36 · Updated 6 months ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆50 · Updated 2 years ago
- ☆30 · Updated last year
- Align, a general text alignment function ☆15 · Updated 2 years ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆22 · Updated 10 months ago
- Bridging the Generalization Gap in Text-to-SQL Parsing with Schema Expansion ☆14 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆64 · Updated last year
- ☆20 · Updated 9 months ago
- ☆32 · Updated this week
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆64 · Updated last year
- ☆35 · Updated last year
- [EMNLP 2025] Verification Engineering for RL in Instruction Following ☆50 · Updated 3 weeks ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆59 · Updated last year
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆29 · Updated last year
- Evaluation on Logical Reasoning and Abstract Reasoning Challenges ☆29 · Updated 9 months ago
- ☆46 · Updated 3 months ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated last year
- Instruction-following benchmark for large reasoning models ☆44 · Updated 5 months ago
- MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following ☆16 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated 2 years ago
- ☆17 · Updated 10 months ago
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆31 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆64 · Updated 2 years ago