MingyuJ666 / LVLM-Safety
[FCS'24] LVLM Safety paper
☆14 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for LVLM-Safety
- [EMNLP 2024 Findings] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models ☆19 · Updated last week
- ☆31 · Updated 5 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆62 · Updated last month
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆71 · Updated 6 months ago
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation" ☆24 · Updated 3 months ago
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆106 · Updated last month
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆84 · Updated 5 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆63 · Updated last year
- Multilingual safety benchmark for Large Language Models ☆24 · Updated 2 months ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆62 · Updated 8 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆99 · Updated 4 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆71 · Updated 2 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆75 · Updated last month
- [ATTRIB @ NeurIPS 2024] When Attention Sink Emerges in Language Models: An Empirical View ☆29 · Updated last month
- Official code implementation of SKU, accepted by ACL 2024 Findings ☆11 · Updated 6 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆60 · Updated 8 months ago
- A lightweight library for large language model (LLM) jailbreaking defense ☆39 · Updated last month
- ☆153 · Updated 11 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆31 · Updated 3 weeks ago
- Accepted by ECCV 2024 ☆74 · Updated last month
- Code and dataset for the paper "Can Editing LLMs Inject Harm?" ☆17 · Updated last week
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆61 · Updated last month
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆52 · Updated last week
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆48 · Updated 3 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆67 · Updated 9 months ago
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ☆79 · Updated 8 months ago
- ☆39 · Updated last month
- ☆53 · Updated 3 weeks ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆76 · Updated 3 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated last month