☆185 · Updated Jun 20, 2024
Alternatives and similar repositories for semantic_uncertainty
Users interested in semantic_uncertainty are comparing it to the libraries listed below.
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ☆407 · Updated Apr 12, 2024
- ☆105 · Updated Jun 30, 2024
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models. ☆62 · Updated Sep 4, 2024
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models. ☆812 · Updated May 21, 2025
- The official repo for "Towards Uncertainty-Aware Language Agent". ☆31 · Updated Aug 15, 2024
- ☆52 · Updated Jul 31, 2024
- Code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach". ☆24 · Updated Oct 21, 2024
- ☆444 · Updated this week
- Conformal Language Modeling. ☆31 · Updated Dec 21, 2023
- Source code for reproducing the experimental results in the semantic density paper (NeurIPS 2024). ☆19 · Updated Sep 28, 2025
- Code for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs". ☆145 · Updated Mar 14, 2024
- Teaching Models to Express Their Uncertainty in Words. ☆39 · Updated May 26, 2022
- ☆13 · Updated Jan 14, 2026
- ☆30 · Updated this week
- Active Learning Helps Pretrained Models Learn the Intended Task (https://arxiv.org/abs/2204.08491) by Alex Tamkin, Dat Nguyen, Salil Desh… ☆11 · Updated Nov 22, 2022
- ☆35 · Updated May 30, 2022
- ☆58 · Updated Jun 30, 2023
- ☆53 · Updated Apr 9, 2025
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024). ☆42 · Updated Jan 18, 2026
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆29 · Updated Jun 4, 2024
- Code for the ACL 2022 paper "CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation". ☆23 · Updated Oct 23, 2022
- ✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Model… ☆18 · Updated Mar 13, 2025
- ☆14 · Updated Jul 24, 2024
- ☆14 · Updated Oct 28, 2023
- Code for "Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities" (NeurIPS'24). ☆36 · Updated Dec 17, 2024
- Token-level Reference-free Hallucination Detection. ☆97 · Updated Jul 25, 2023
- Models, data, and code for the paper "MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models". ☆24 · Updated Sep 26, 2024
- [NAACL 2022] Code repo for the paper "ACTUNE: Uncertainty-based Active Self-Training for Active Fine-Tuning of Pretrained Lan… ☆15 · Updated Nov 16, 2022
- Uncertainty Quantification with Pre-trained Language Models: An Empirical Analysis. ☆15 · Updated Oct 11, 2022
- Code and data for the FACTOR paper. ☆53 · Updated Nov 15, 2023
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆417 · Updated Apr 13, 2025
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers. ☆137 · Updated Mar 14, 2024
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection". ☆66 · Updated Apr 11, 2025
- Self-Supervised Alignment with Mutual Information. ☆20 · Updated May 24, 2024
- ☆42 · Updated Feb 2, 2024
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep". ☆174 · Updated Apr 23, 2025
- ☆24 · Updated Dec 2, 2023
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆29 · Updated Oct 30, 2024
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,076 · Updated Sep 27, 2025