jlko / semantic_uncertainty
Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments).
☆374 · Updated last year
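For readers new to the method: the paper's core quantity is semantic entropy, computed by sampling several answers, clustering them by bidirectional entailment (answers that mean the same thing land in one cluster), and taking the entropy over those clusters. The sketch below illustrates only the discrete version; the `entails` function is a hypothetical placeholder for an NLI model, and none of these names come from the repository's actual API.

```python
import math

def entails(a: str, b: str) -> bool:
    # Hypothetical stand-in for a bidirectional entailment check; the paper
    # uses an NLI model for this step, not string matching.
    return a.strip().lower() == b.strip().lower()

def semantic_entropy(answers):
    # Cluster sampled answers whose meanings mutually entail each other.
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if entails(ans, cluster[0]) and entails(cluster[0], ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    # Entropy of the empirical distribution over semantic clusters.
    n = len(answers)
    return -sum(len(c) / n * math.log(len(c) / n) for c in clusters)

print(semantic_entropy(["Paris", "paris", "Lyon"]))  # ~0.64: two meaning clusters
```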
Alternatives and similar repositories for semantic_uncertainty
Users interested in semantic_uncertainty are comparing it to the libraries listed below.
- ☆179 · Updated last year
- LLM hallucination paper list ☆323 · Updated last year
- Codebase for reproducing the experiments of the semantic uncertainty paper (paragraph-length experiments). ☆70 · Updated last year
- ☆611 · Updated 2 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆517 · Updated last year
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆273 · Updated 7 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆520 · Updated 9 months ago
- A Survey of Attributions for Large Language Models ☆216 · Updated last year
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆190 · Updated 8 months ago
- A Survey on Data Selection for Language Models ☆250 · Updated 5 months ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆394 · Updated 6 months ago
- Code repository for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆133 · Updated last year
- List of papers on hallucination detection in LLMs. ☆974 · Updated last week
- ☆46 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs. ☆375 · Updated 11 months ago
- ☆447 · Updated 2 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 · Updated 8 months ago
- A resource repository for representation engineering in large language models ☆138 · Updated 11 months ago
- ☆37 · Updated 10 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆571 · Updated last year
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆102 · Updated last year
- ☆56 · Updated 2 years ago
- This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback. ☆552 · Updated 11 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆100 · Updated last month
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆205 · Updated 10 months ago
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models ☆786 · Updated 5 months ago
- ☆102 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆139 · Updated last year
- Awesome SAE papers ☆51 · Updated 5 months ago
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆339 · Updated last year