csitfun / GLoRE
A benchmark for evaluating the logical reasoning of LLMs
☆16 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for GLoRE
- ☆36 · Updated 10 months ago
- Repo for ACL 2023 paper "Plug-and-Play Knowledge Injection for Pre-trained Language Models" ☆57 · Updated 7 months ago
- Do Large Language Models Know What They Don't Know? ☆85 · Updated this week
- [ICLR24] The open-source repo of THU-KEG's KoLA benchmark ☆50 · Updated last year
- [EMNLP 2023] Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts ☆23 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆50 · Updated 6 months ago
- ☆82 · Updated last year
- Paper list of "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆61 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆81 · Updated last month
- Merging Generated and Retrieved Knowledge for Open-Domain QA (EMNLP 2023) ☆22 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆59 · Updated 6 months ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆35 · Updated last year
- Paper list and datasets for the paper "A Survey on Data Selection for LLM Instruction Tuning" ☆32 · Updated 9 months ago
- A framework for editing CoTs for better factuality ☆41 · Updated 11 months ago
- [EMNLP 2024] A multi-level hallucination diagnostic benchmark for tool-augmented large language models ☆13 · Updated last month
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆77 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆70 · Updated 9 months ago
- ☆59 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆29 · Updated 2 months ago
- ☆63 · Updated 5 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆84 · Updated 4 months ago
- ☆37 · Updated 6 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions