lorenzkuhn / semantic_uncertainty
☆159 · Updated 9 months ago
Alternatives and similar repositories for semantic_uncertainty:
Users interested in semantic_uncertainty are comparing it to the libraries listed below.
- ☆84 · Updated 8 months ago
- A Survey on Data Selection for Language Models ☆216 · Updated 5 months ago
- ☆14 · Updated 3 months ago
- Code repo for ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆105 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆106 · Updated 6 months ago
- ☆92 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆50 · Updated 3 months ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆97 · Updated 2 years ago
- Function Vectors in Large Language Models (ICLR 2024) ☆144 · Updated 5 months ago
- ☆47 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated 11 months ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆141 · Updated last year
- ☆173 · Updated 7 months ago
- The repo for In-context Autoencoder ☆114 · Updated 10 months ago
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆160 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆73 · Updated 2 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- A resource repository for representation engineering in large language models ☆111 · Updated 4 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆53 · Updated 11 months ago
- ☆50 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆107 · Updated 9 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆34 · Updated 2 months ago
- A Survey of Hallucination in Large Foundation Models ☆54 · Updated last year
- ☆174 · Updated 2 years ago
- contrastive decoding ☆196 · Updated 2 years ago
- ☆67 · Updated last year
- Codes for papers on Large Language Models Personalization (LaMP) ☆147 · Updated last month
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ☆287 · Updated 11 months ago