Yangyi-Chen / PaperList-Trustworthy-Applications
Mostly records papers about models' trustworthy applications, intended to cover topics such as model evaluation & analysis, security, calibration, backdoor learning, and robustness.
☆21 · Updated last year
Alternatives and similar repositories for PaperList-Trustworthy-Applications:
Users interested in PaperList-Trustworthy-Applications are comparing it to the repositories listed below.
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆25 · Updated 7 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- ☆37 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆92 · Updated 11 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 5 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI ☆47 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 7 months ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- ☆29 · Updated last year
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 9 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆28 · Updated 10 months ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆33 · Updated 6 months ago
- ☆21 · Updated last month
- ☆25 · Updated 2 years ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- ☆38 · Updated last year
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆38 · Updated 2 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆61 · Updated 3 months ago
- ☆18 · Updated last month
- Recent papers on (1) psychology of LLMs and (2) biases in LLMs ☆48 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆81 · Updated 7 months ago
- ☆17 · Updated 11 months ago
- The repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆29 · Updated 5 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆58 · Updated 7 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 8 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆89 · Updated 8 months ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆82 · Updated last year