thu-coai / ComplexBench
Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track)
☆70 · Updated 3 weeks ago
Alternatives and similar repositories for ComplexBench:
Users interested in ComplexBench are comparing it to the repositories listed below.
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆97 · Updated 3 months ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆47 · Updated 10 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆66 · Updated 3 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models" ☆46 · Updated 8 months ago
- Towards Systematic Measurement for Long Text Quality ☆32 · Updated 6 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆75 · Updated 7 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆144 · Updated 6 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆60 · Updated 4 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆73 · Updated 2 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 9 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆106 · Updated 5 months ago
- The implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback" ☆38 · Updated 7 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- Collection of papers for scalable automated alignment. ☆85 · Updated 4 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆120 · Updated 9 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆158 · Updated 8 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆111 · Updated 3 weeks ago
- Do Large Language Models Know What They Don't Know? ☆92 · Updated 4 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- The official repository of the Omni-MATH benchmark. ☆74 · Updated 2 months ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- A Bilingual Role Evaluation Benchmark for Large Language Models ☆39 · Updated last year
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year