LoveCatc / supervised-llm-uncertainty-estimation
This repo contains the code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach".
☆24Updated last year
Alternatives and similar repositories for supervised-llm-uncertainty-estimation
Users interested in supervised-llm-uncertainty-estimation are comparing it to the libraries listed below.
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024)☆126Updated last year
- ☆57Updated 2 years ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods☆158Updated 6 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers"☆123Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers☆160Updated last month
- ☆181Updated last year
- A Survey on Data Selection for Language Models☆253Updated 7 months ago
- Using Explanations as a Tool for Advanced LLMs☆68Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation.☆107Updated last month
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales"☆112Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering☆68Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs"☆137Updated last year
- [NAACL 2025] The official implementation of paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M…☆29Updated last year
- PASTA: Post-hoc Attention Steering for LLMs☆132Updated last year
- ☆52Updated 8 months ago
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award)☆133Updated 5 months ago
- Code and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref…☆69Updated 9 months ago
- ☆104Updated last year
- The implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability"☆54Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't…☆126Updated last year
- Critique-out-Loud Reward Models☆70Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following☆134Updated last year
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models☆105Updated last year
- Data and code for the Corr2Cause paper (ICLR 2024)☆111Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024)☆62Updated last year
- Awesome LLM Self-Consistency: a curated list of work on self-consistency in Large Language Models☆115Updated 5 months ago
- ☆103Updated 2 years ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors☆82Updated last year
- A Survey of Hallucination in Large Foundation Models☆55Updated last year
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments).☆399Updated last year