MiaoXiong2320 / llm-uncertainty
Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs"
☆114 · Updated last year
Alternatives and similar repositories for llm-uncertainty:
Users interested in llm-uncertainty are comparing it to the repositories listed below:
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆49 · Updated 8 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- Source code for NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆41 · Updated 3 weeks ago
- ☆29 · Updated 2 months ago
- Accepted LLM Papers in NeurIPS 2024 ☆36 · Updated 6 months ago
- ☆165 · Updated 10 months ago
- Code for paper: Aligning Large Language Models with Representation Editing: A Control Perspective ☆29 · Updated 3 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆75 · Updated 4 months ago
- A curated list of resources for activation engineering ☆67 · Updated last month
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆72 · Updated 6 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆109 · Updated last year
- ☆88 · Updated 10 months ago
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆21 · Updated 4 months ago
- ☆40 · Updated last year
- Reproduction of "RLCD Reinforcement Learning from Contrast Distillation for Language Model Alignment☆69Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆54 · Updated 5 months ago
- ☆95 · Updated last month
- ☆51 · Updated 3 weeks ago
- ☆93 · Updated last year
- ☆49 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated last month
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆42 · Updated 6 months ago
- Code for "A Sober Look at Progress in Language Model Reasoning" paper ☆41 · Updated 3 weeks ago
- ☆29 · Updated last year
- ☆41 · Updated this week
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated 2 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆37 · Updated 3 months ago
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, … ☆48 · Updated 4 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆30 · Updated this week
- This is the official repo for Towards Uncertainty-Aware Language Agent. ☆24 · Updated 8 months ago