wang2226 / FOLK
[EMNLP 2023] Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models
☆22 · Updated last year
Alternatives and similar repositories for FOLK:
Users interested in FOLK are comparing it to the repositories listed below.
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆51 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆105 · Updated 5 months ago
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆49 · Updated last week
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆29 · Updated last year
- Code and data for "The Power of Noise: Redefining Retrieval for RAG Systems" ☆49 · Updated 4 months ago
- ☆55 · Updated 3 months ago
- ☆16 · Updated last year
- ☆38 · Updated last year
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin… ☆97 · Updated 4 months ago
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆55 · Updated 4 months ago
- [ACL 2024] Controlled Text Generation for Large Language Model with Dynamic Attribute Graphs ☆37 · Updated 5 months ago
- ☆36 · Updated 3 months ago
- ☆24 · Updated last year
- ☆65 · Updated last year
- ☆19 · Updated last year
- Implementation of "ACL'24: When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation" ☆22 · Updated 7 months ago
- ☆18 · Updated last year
- Answering Ambiguous Questions via Iterative Prompting ☆14 · Updated 9 months ago
- [APSIPA ASC 2023] The official code of the paper "FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Au… ☆16 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆105 · Updated 5 months ago
- ☆17 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆33 · Updated 6 months ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- ☆24 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 10 months ago
- Source code of our ACL 2024 long paper, MIND ☆37 · Updated 9 months ago
- Enhancing contextual understanding in large language models through contrastive decoding ☆15 · Updated 10 months ago
- ☆24 · Updated 2 years ago