jinhaoduan / SAR
[ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models
☆53 · Updated 11 months ago
Alternatives and similar repositories for SAR
Users interested in SAR are comparing it to the repositories listed below.
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆125 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- ☆172 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [ACL 2025] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆18 · Updated 4 months ago
- ☆41 · Updated 10 months ago
- LLM Unlearning ☆172 · Updated last year
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆77 · Updated 10 months ago
- A curated list of resources for activation engineering ☆99 · Updated 2 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 3 weeks ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆39 · Updated 6 months ago
- ☆29 · Updated last year
- ☆23 · Updated 4 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆41 · Updated 2 months ago
- ☆97 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆130 · Updated 10 months ago
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆51 · Updated 3 months ago
- ☆38 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆39 · Updated 8 months ago
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin… ☆102 · Updated 9 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages (https://arxiv.org/abs/2310.19156) ☆35 · Updated last year
- Toolkit for evaluating the trustworthiness of generative foundation models.