HillZhang1999 / llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
☆1,013 · Updated 5 months ago
Alternatives and similar repositories for llm-hallucination-survey:
Users interested in llm-hallucination-survey are comparing it to the repositories listed below.
- The repository for the survey paper <<Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity>> ☆339 · Updated last year
- Paper List for In-context Learning ☆854 · Updated 6 months ago
- Aligning Large Language Models with Human: A Survey ☆728 · Updated last year
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆952 · Updated 3 weeks ago
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆442 · Updated 3 months ago
- Must-read Papers on Knowledge Editing for Large Language Models. ☆1,076 · Updated last month
- This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback. ☆519 · Updated 6 months ago
- LLM hallucination paper list ☆315 · Updated last year
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆492 · Updated last year
- An Awesome Collection for LLM Survey ☆340 · Updated 3 weeks ago
- ☆539 · Updated last month
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆441 · Updated 6 months ago
- This is the repo for the survey of LLM4IR. ☆477 · Updated 8 months ago
- ☆900 · Updated 9 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,157 · Updated last year
- List of papers on hallucination detection in LLMs. ☆855 · Updated 3 weeks ago
- ✨✨ Latest Papers and Benchmarks in Reasoning with Foundation Models ☆571 · Updated last week
- Reading list of Instruction-tuning. A trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆768 · Updated last year
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,518 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆553 · Updated 4 months ago
- Prod Env ☆416 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆482 · Updated 3 months ago
- Paper list about multimodal and large language models, only used to record papers I read in the daily arXiv for personal needs. ☆621 · Updated this week
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆363 · Updated 8 months ago
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models ☆747 · Updated 2 months ago
- Resource, Evaluation and Detection Papers for ChatGPT ☆455 · Updated last year
- Collection of training data management explorations for large language models ☆322 · Updated 9 months ago
- Papers related to LLM agents published at top conferences ☆315 · Updated 3 weeks ago
- ☆524 · Updated 4 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆466 · Updated last year