genglinliu / UnknownBench
Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge
☆14 · Updated last year
Alternatives and similar repositories for UnknownBench
Users interested in UnknownBench are comparing it to the repositories listed below.
- Github repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆51 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- ☆11 · Updated last year
- ☆25 · Updated last month
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 10 months ago
- ☆17 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- ☆74 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 11 months ago
- Code for ACL 2023 paper "A Close Look into the Calibration of Pre-trained Language Models" ☆11 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- Repository for the Bias Benchmark for QA dataset. ☆123 · Updated last year
- ☆75 · Updated 6 months ago
- ☆26 · Updated 9 months ago
- ☆29 · Updated last year
- ☆11 · Updated 4 months ago
- ☆45 · Updated last year
- ☆41 · Updated 3 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Evaluation of the Cross-Lingual Knowledge Alignment in LLMs ☆9 · Updated last year
- Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper ☆79 · Updated 4 years ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆81 · Updated 10 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated last year
- ☆26 · Updated last year
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆25 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆74 · Updated 4 months ago
- This code accompanies the paper DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering. ☆16 · Updated 2 years ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆41 · Updated 9 months ago