zlin7 / UQ-NLG
☆103 · Updated last year
Alternatives and similar repositories for UQ-NLG
Users interested in UQ-NLG are comparing it to the repositories listed below.
- ☆180 · Updated last year
- ☆40 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated 11 months ago
- ☆57 · Updated 2 years ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆84 · Updated 9 months ago
- ☆102 · Updated 2 years ago
- ☆29 · Updated last year
- ☆46 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆137 · Updated last year
- ☆51 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆78 · Updated last year
- ☆49 · Updated last year
- ☆52 · Updated 8 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆133 · Updated last year
- ☆51 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆151 · Updated 5 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆31 · Updated 10 months ago
- Official repo for "Towards Uncertainty-Aware Language Agent" ☆30 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆28 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆42 · Updated 6 months ago
- ☆79 · Updated 3 years ago
- [ICLR 2025] General-purpose activation steering library ☆127 · Updated 2 months ago
- ☆25 · Updated 6 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- Answers "How to do patching on all available SAEs on GPT-2?"; official repository of the implementation of the p… ☆12 · Updated 10 months ago
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆63 · Updated last year
- ☆33 · Updated last year
- Conformal Language Modeling ☆32 · Updated last year