LoveCatc / supervised-llm-uncertainty-estimation
This repo contains the code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach".
☆21 · Updated 10 months ago
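The paper's core idea is to treat uncertainty estimation as a supervised problem: train a small model on features extracted from the LLM (e.g., its hidden activations) to predict whether a generated answer is correct, then read that model's output as a confidence score. Below is a minimal sketch of this idea, not the repo's actual code; it assumes synthetic stand-in features and a logistic-regression probe, whereas the repo's real feature set and model may differ.

```python
# Minimal sketch of supervised uncertainty estimation:
# train a probe to predict answer correctness from LLM-derived
# features, then use its predicted probability as confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in features: in practice, each row would be a hidden-state
# vector captured from the LLM while it answers a question.
n_samples, hidden_dim = 1000, 64
X = rng.normal(size=(n_samples, hidden_dim))
w_true = rng.normal(size=hidden_dim)
# Stand-in labels: 1 if the generated answer was judged correct.
y = (X @ w_true + rng.normal(size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The probe's probability of correctness doubles as a confidence score.
confidence = probe.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, confidence))
```

In a real pipeline, the features would come from the model's internal states during generation and the correctness labels from comparing its answers against references; the probe is cheap to train and adds no overhead to generation itself.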
Alternatives and similar repositories for supervised-llm-uncertainty-estimation
Users interested in supervised-llm-uncertainty-estimation are comparing it to the repositories listed below.
- ☆172 · Updated last year
- ☆99 · Updated last year
- ☆43 · Updated last year
- Using Explanations as a Tool for Advanced LLMs ☆67 · Updated 11 months ago
- ☆55 · Updated 2 years ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆131 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆110 · Updated 11 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆99 · Updated this week
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆125 · Updated 2 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆116 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- A Survey on Data Selection for Language Models ☆247 · Updated 4 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments) ☆358 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆155 · Updated 6 months ago
- A resource repository for representation engineering in large language models ☆132 · Updated 9 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆116 · Updated last year
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen et al. ☆51 · Updated 8 months ago
- ☆62 · Updated 5 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆124 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆116 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Critique-out-Loud Reward Models ☆70 · Updated 10 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆178 · Updated last year
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆185 · Updated 6 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆77 · Updated 5 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆40 · Updated 7 months ago
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆101 · Updated last year
- ☆96 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆111 · Updated 6 months ago