PKU-Baichuan-MLSystemLab / SysBench
SysBench: Can Large Language Models Follow System Messages?
☆37 · Updated last year
Alternatives and similar repositories for SysBench
Users who are interested in SysBench are comparing it to the repositories listed below.
- ☆57 · Updated last year
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated 2 years ago
- ☆33 · Updated last year
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆82 · Updated 2 years ago
- ☆17 · Updated 9 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆98 · Updated 10 months ago
- This is the repository for the paper "CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models" ☆29 · Updated 2 years ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆53 · Updated last year
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆50 · Updated last year
- Data and code for the EMNLP 2023 paper "QTSumm: Query-Focused Summarization over Tabular Data" ☆22 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆117 · Updated 6 months ago
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆44 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆64 · Updated last year
- Do Large Language Models Know What They Don’t Know? ☆102 · Updated last year
- ☆41 · Updated 2 years ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆83 · Updated 11 months ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆48 · Updated 2 years ago
- [COLM'24] "How Easily do Irrelevant Inputs Skew the Responses of Large Language Models?" ☆22 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- ☆88 · Updated 2 years ago
- Collection of papers for scalable automated alignment ☆94 · Updated last year
- Code for our EMNLP 2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆25 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆41 · Updated 2 years ago
- ☆70 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆42 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- ☆87 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- ☆64 · Updated 3 years ago