domaineval / DomainEval
DOMAINEVAL is an auto-constructed benchmark for multi-domain code generation. It consists of 2k+ subjects (each comprising a description, reference code, and tests) covering six domains: Computation, Basic, Network, Cryptography, Visualization, and System.
☆14 · Updated last year
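The subject format described above (a natural-language description paired with reference code and executable tests) lends itself to a simple test-based evaluation loop. Below is a minimal, illustrative sketch in Python; the field names and the toy subject are assumptions for illustration only, not the actual DomainEval schema or harness.

```python
import unittest

# Toy subject record; the field names are assumptions for illustration,
# not the actual DomainEval schema.
subject = {
    "domain": "Computation",
    "description": "Return the sum of the squares of a list of integers.",
    "reference_code": (
        "def sum_of_squares(xs):\n"
        "    return sum(x * x for x in xs)\n"
    ),
    "test_code": (
        "import unittest\n"
        "class SubjectTest(unittest.TestCase):\n"
        "    def test_basic(self):\n"
        "        self.assertEqual(sum_of_squares([1, 2, 3]), 14)\n"
    ),
}

def passes_tests(candidate_code: str, test_code: str) -> bool:
    """Execute candidate code and the subject's tests in one namespace."""
    ns = {}
    exec(candidate_code, ns)   # defines the candidate solution
    exec(test_code, ns)        # defines the unittest TestCase
    suite = unittest.TestLoader().loadTestsFromTestCase(ns["SubjectTest"])
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

# Sanity check: the reference code should pass its own tests.
print(passes_tests(subject["reference_code"], subject["test_code"]))  # -> True
```

A real harness would additionally sandbox the execution and aggregate pass rates across all subjects and domains; this sketch only shows the per-subject check.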
Alternatives and similar repositories for DomainEval
Users interested in DomainEval are comparing it to the repositories listed below.
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH ☆26 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆59 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆29 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆77 · Updated 3 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆30 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated last year
- Trending projects & awesome papers about data-centric LLM studies. ☆39 · Updated 8 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆53 · Updated 8 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] ☆95 · Updated 9 months ago
- Code and data for "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?" (ACL 2024) ☆32 · Updated last year
- The official repository of the Omni-MATH benchmark. ☆93 · Updated last year
- ☆30 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- [ICML 2024] Adaptive decoding balances the diversity and coherence of open-ended text generation. ☆19 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆64 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆44 · Updated last year
- ☆22 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆55 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆82 · Updated 2 years ago
- Lightweight tool to identify data contamination in LLM evaluation ☆53 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆104 · Updated 4 months ago
- ☆16 · Updated last year
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆51 · Updated last year
- ☆145 · Updated 4 months ago
- ☆58 · Updated last year
- ☆56 · Updated last year
- Instruction-following benchmark for large reasoning models ☆44 · Updated 6 months ago