jxzhangjhu / Awesome-LLM-Uncertainty-Reliability-Robustness
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
⭐708 · Updated 7 months ago
Alternatives and similar repositories for Awesome-LLM-Uncertainty-Reliability-Robustness:
- Must-read Papers on Knowledge Editing for Large Language Models. ⭐1,005 · Updated last month
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ⭐495 · Updated 3 months ago
- Paper List for In-context Learning 🏷 ⭐835 · Updated 4 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …" ⭐977 · Updated 2 months ago
- A collection of papers and resources on Reasoning in Large Language Models. ⭐554 · Updated last year
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ⭐933 · Updated last month
- List of papers on hallucination detection in LLMs. ⭐765 · Updated last month
- Representation Engineering: A Top-Down Approach to AI Transparency ⭐787 · Updated 6 months ago
- Reading list of instruction tuning. A trend starting with Natural-Instruction (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ⭐760 · Updated last year
- Aligning Large Language Models with Human: A Survey ⭐715 · Updated last year
- LLM hallucination paper list ⭐302 · Updated 11 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ⭐462 · Updated 3 weeks ago
- A resource repository for machine unlearning in large language models ⭐307 · Updated last week
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ⭐496 · Updated 2 weeks ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ⭐277 · Updated 10 months ago
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ⭐481 · Updated 10 months ago
- A curated list of LLM interpretability materials: tutorials, libraries, surveys, papers, blogs, etc. ⭐200 · Updated 3 months ago
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ⭐435 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ⭐798 · Updated this week
- Continual Learning of Large Language Models: A Comprehensive Survey ⭐339 · Updated 2 weeks ago
- ⭐153 · Updated 7 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ⭐546 · Updated last year
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future