Yangyi-Chen / PaperList-Trustworthy-Applications
Mostly recording papers about models' trustworthy applications. Intended to cover topics such as model evaluation & analysis, security, calibration, backdoor learning, and robustness.
☆20 · Updated last year
Alternatives and similar repositories for PaperList-Trustworthy-Applications:
Users interested in PaperList-Trustworthy-Applications are comparing it to the repositories listed below
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" in Findings of NAACL 2022 ☆29 · Updated 2 years ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 8 months ago
- ☆46 · Updated 7 months ago
- ☆25 · Updated 5 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆57 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆27 · Updated 10 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆35 · Updated last week
- ☆30 · Updated 9 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆84 · Updated 5 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 7 months ago
- ☆21 · Updated 7 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated 11 months ago
- ☆30 · Updated 4 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆23 · Updated 5 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- ☆25 · Updated last year
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation" ☆27 · Updated 6 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆69 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 4 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆71 · Updated 7 months ago
- ☆41 · Updated 2 weeks ago
- ☆37 · Updated last year
- ☆15 · Updated 8 months ago
- ☆20 · Updated 7 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆62 · Updated 3 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs ☆46 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆56 · Updated last month
- ☆37 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆79 · Updated 5 months ago
- Code for the ICLR'22 paper "On Robust Prefix-Tuning for Text Classification" ☆27 · Updated 2 years ago