JacksonWuxs / UsableXAI_LLM
Using Explanations as a Tool for Advanced LLMs
☆67 · Updated last year
Alternatives and similar repositories for UsableXAI_LLM
Users interested in UsableXAI_LLM are comparing it to the repositories listed below
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 · Updated 8 months ago
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆60 · Updated 6 months ago
- ☆154 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Code for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆133 · Updated last year
- ☆41 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆37 · Updated 2 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (paragraph-length experiments) ☆73 · Updated last year
- [ICLR'25] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆64 · Updated 7 months ago
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin…" ☆103 · Updated 11 months ago
- Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref…" ☆68 · Updated 7 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆121 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆99 · Updated last year
- ☆46 · Updated 8 months ago
- LLM Unlearning ☆177 · Updated 2 years ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆91 · Updated 5 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆37 · Updated 5 months ago
- Code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach" ☆21 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 4 months ago
- ☆41 · Updated last year
- ☆102 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆163 · Updated 7 months ago
- A curated list of resources for activation engineering ☆107 · Updated last month
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆85 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆81 · Updated 10 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆102 · Updated 2 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆60 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆127 · Updated 8 months ago
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆32 · Updated 10 months ago