OATML / semantic-entropy-probes
☆46 · Updated last year
Alternatives and similar repositories for semantic-entropy-probes
Users interested in semantic-entropy-probes are comparing it to the libraries listed below.
- ☆57 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆66 · Updated 11 months ago
- ☆102 · Updated last year
- ☆52 · Updated 7 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆140 · Updated 4 months ago
- [ICLR 2025] General-purpose activation steering library ☆116 · Updated last month
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆136 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated 10 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆83 · Updated 8 months ago
- ☆101 · Updated 2 years ago
- ☆180 · Updated last year
- ☆40 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆133 · Updated last year
- ☆29 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆183 · Updated 6 months ago
- ☆63 · Updated 8 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆192 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆40 · Updated last year
- ☆92 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆43 · Updated 9 months ago
- ☆51 · Updated 2 years ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 · Updated 8 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆111 · Updated 3 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (paragraph-length experiments) ☆74 · Updated last year
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆61 · Updated 7 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- ☆46 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆127 · Updated 11 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆121 · Updated last year