domaineval / DomainEval
DOMAINEVAL is an auto-constructed benchmark for multi-domain code generation. It comprises 2k+ subjects (each consisting of a description, reference code, and tests) spanning six domains: Computation, Basic, Network, Cryptography, Visualization, and System.
☆14 · Updated last year
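The description above implies each subject pairs a task description with reference code and tests. The sketch below shows what evaluating a candidate solution against such a subject could look like; the field names and the `evaluate` helper are illustrative assumptions, not DomainEval's actual schema or harness:

```python
# Hypothetical DomainEval-style subject: a task description, a reference
# implementation, and tests. Field names are assumptions for illustration,
# not the benchmark's actual schema.
subject = {
    "domain": "Computation",
    "description": "Return the sum of squares of a list of integers.",
    "reference_code": "def solution(xs):\n    return sum(x * x for x in xs)",
    "tests": [
        "assert solution([1, 2, 3]) == 14",
        "assert solution([]) == 0",
    ],
}

def evaluate(candidate_code: str, subject: dict) -> bool:
    """Run the subject's tests against candidate code; True if all pass."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)   # define solution()
        for test in subject["tests"]:
            exec(test, namespace)         # raises AssertionError on failure
    except Exception:
        return False
    return True

# The reference implementation should pass its own tests.
print(evaluate(subject["reference_code"], subject))  # True
```

A real harness would additionally sandbox execution and enforce timeouts, since generated code is untrusted.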
Alternatives and similar repositories for DomainEval
Users interested in DomainEval are comparing it to the repositories listed below.
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] · ☆79 · Updated last year
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua… · ☆36 · Updated 2 years ago
- ☆16 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning · ☆29 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLM… · ☆68 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy · ☆77 · Updated 3 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models · ☆65 · Updated last year
- Instruction-following benchmark for large reasoning models · ☆44 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" · ☆20 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs) · ☆59 · Updated last year
- Code and data for "MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models" · ☆51 · Updated 2 months ago
- Lightweight tool to identify data contamination in LLM evaluation · ☆53 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems · ☆64 · Updated last year
- PyTorch implementation of Tree Preference Optimization (TPO) (accepted at ICLR'25) · ☆26 · Updated 9 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" · ☆83 · Updated last year
- ☆31 · Updated last year
- ☆30 · Updated last year
- Trending projects & awesome papers about data-centric LLM studies · ☆39 · Updated 8 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH · ☆26 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… · ☆129 · Updated last year
- [EMNLP 2025] Verification Engineering for RL in Instruction Following · ☆50 · Updated last month
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling · ☆53 · Updated 8 months ago
- ☆41 · Updated 2 years ago
- A method of ensemble learning for heterogeneous large language models · ☆64 · Updated last year
- Evaluate the Quality of Critique · ☆36 · Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation · ☆109 · Updated last week
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" · ☆39 · Updated 2 years ago
- Reproducing R1 for Code with Reliable Rewards · ☆12 · Updated 9 months ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆69 · Updated last year
- [ICML 2024] Adaptive decoding balances the diversity and coherence of open-ended text generation · ☆19 · Updated last year