EchoseChen / SPA-VL-RLHF
Reinforcement learning (RLHF) code for the SPA-VL dataset.
☆28 · Updated 7 months ago
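SPA-VL is a safety preference alignment dataset for vision-language models, and this repository holds the RLHF training code for it. As a minimal sketch of what consuming the dataset might look like, the snippet below loads preference pairs with the Hugging Face `datasets` library; the Hub ID `sqrti/SPA-VL` and the `question`/`chosen`/`rejected` field names are assumptions, not confirmed by this page.

```python
# Minimal sketch: loading SPA-VL preference pairs for RLHF-style training.
# Assumptions (not confirmed by this page): the dataset lives on the Hugging
# Face Hub under an ID like "sqrti/SPA-VL", and each record carries an image,
# a question, and a chosen/rejected response pair.
from datasets import load_dataset

ds = load_dataset("sqrti/SPA-VL", split="train")  # assumed Hub ID

sample = ds[0]
print(sample["question"])  # prompt about the image (assumed field name)
print(sample["chosen"])    # preferred, safer response (assumed field name)
print(sample["rejected"])  # dispreferred response (assumed field name)
```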
Alternatives and similar repositories for SPA-VL-RLHF:
Users interested in SPA-VL-RLHF are comparing it to the repositories listed below.
- Accepted by ECCV 2024 ☆99 · Updated 4 months ago
- ☆29 · Updated 4 months ago
- ☆21 · Updated 3 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆75 · Updated last year
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆15 · Updated 3 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated 11 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆125 · Updated 2 months ago
- ☆39 · Updated 2 weeks ago
- ☆41 · Updated 8 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆84 · Updated 5 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆67 · Updated 4 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆55 · Updated last month
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆89 · Updated last year
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆161 · Updated last year
- S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models ☆52 · Updated this week
- ☆27 · Updated 2 months ago
- A Survey on the Honesty of Large Language Models ☆53 · Updated 2 months ago
- Submission guide + discussion board for the AI Singapore Global Challenge for Safe and Secure LLMs (Track 1A) ☆16 · Updated 7 months ago
- [FCS'24] LVLM safety paper ☆17 · Updated last month
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆19 · Updated 7 months ago
- 😎 Curated list of awesome LMM hallucination papers, methods & resources ☆147 · Updated 10 months ago
- ☆61 · Updated 8 months ago
- A survey on harmful fine-tuning attacks for large language models ☆135 · Updated this week
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models ☆26 · Updated last year
- Accepted by IJCAI-24 Survey Track ☆190 · Updated 5 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆71 · Updated 7 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆60 · Updated last year
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆70 · Updated 11 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆38 · Updated 8 months ago