JacksonWuxs / UsableXAI_LLM
Using Explanations as a Tool for Advanced LLMs
☆67 · Updated last year
Alternatives and similar repositories for UsableXAI_LLM
Users that are interested in UsableXAI_LLM are comparing it to the libraries listed below
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆54 · Updated 5 months ago
- ☆153 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆132 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆157 · Updated 7 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (paragraph-length experiments) ☆69 · Updated last year
- [EMNLP 2025 Main] ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated last month
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆55 · Updated last year
- A curated list of resources for activation engineering ☆102 · Updated 3 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆128 · Updated 2 months ago
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆102 · Updated last year
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆25 · Updated last month
- LoFiT: Localized Fine-tuning on LLM Representations ☆41 · Updated 8 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆81 · Updated 11 months ago
- Code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach" ☆21 · Updated 11 months ago
- ☆55 · Updated 2 years ago
- ☆36 · Updated last year
- Dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆76 · Updated 10 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- Data and code for the Corr2Cause paper (ICLR 2024) ☆111 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆97 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆111 · Updated 6 months ago
- [ICLR'25] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆64 · Updated 6 months ago
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆36 · Updated 3 months ago
- Code and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆64 · Updated 6 months ago
- ☆62 · Updated 6 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆43 · Updated 10 months ago
- ☆41 · Updated 11 months ago
- NeurIPS'24 — LLM Safety Landscape ☆29 · Updated 6 months ago
- LLM Unlearning ☆174 · Updated last year