zhliu0106 / learning-to-refuse
Official Implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs"
☆8 · Updated 6 months ago
Alternatives and similar repositories for learning-to-refuse
Users interested in learning-to-refuse are comparing it to the repositories listed below
- Implementation of AdaCQR (COLING 2025) · ☆10 · Updated 5 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models · ☆17 · Updated 11 months ago
- BeHonest: Benchmarking Honesty in Large Language Models · ☆34 · Updated 10 months ago
- Official Implementation of "Probing Language Models for Pre-training Data Detection" · ☆19 · Updated 6 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing · ☆36 · Updated 10 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆65 · Updated last year
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" · ☆25 · Updated last year
- SysBench: Can Large Language Models Follow System Messages? · ☆30 · Updated 9 months ago
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" · ☆16 · Updated last year
- ☆41 · Updated last year
- ☆30 · Updated 6 months ago
- ☆23 · Updated 10 months ago
- ☆74 · Updated last year
- ☆41 · Updated 8 months ago
- ☆54 · Updated 10 months ago
- Repo for outstanding paper @ ACL 2023 "Do PLMs Know and Understand Ontological Knowledge?" · ☆31 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems · ☆62 · Updated 11 months ago
- Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge · ☆14 · Updated last year
- ☆18 · Updated last year
- ☆44 · Updated last year
- Methods and evaluation for aligning language models temporally · ☆29 · Updated last year
- ☆16 · Updated last week
- AbstainQA, ACL 2024 · ☆26 · Updated 8 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions · ☆113 · Updated 9 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? · ☆81 · Updated last year
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" · ☆17 · Updated 9 months ago
- Merging Generated and Retrieved Knowledge for Open-Domain QA (EMNLP 2023) · ☆22 · Updated last year
- Implementation of "ACL'24: When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation" · ☆25 · Updated 11 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… · ☆29 · Updated 7 months ago
- Code and data for the FACTOR paper · ☆47 · Updated last year