BRZ911 / Wrong-of-Thought
Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information (WoT)
☆13 · Updated 5 months ago
Alternatives and similar repositories for Wrong-of-Thought:
Users interested in Wrong-of-Thought are comparing it to the repositories listed below.
- ☆99 · Updated 6 months ago
- Reinforcement learning code for the SPA-VL dataset ☆31 · Updated 8 months ago
- Accepted by ECCV 2024 ☆109 · Updated 4 months ago
- ☆45 · Updated 3 months ago
- ☆42 · Updated 9 months ago
- A survey on harmful fine-tuning attacks for large language models ☆147 · Updated last week
- ☆64 · Updated 9 months ago
- ☆49 · Updated 5 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆132 · Updated this week
- Accepted by IJCAI-24 Survey Track ☆196 · Updated 6 months ago
- Source code for our ACL 2024 long paper MIND ☆37 · Updated 9 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ☆52 · Updated 8 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆132 · Updated 2 weeks ago
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆26 · Updated 3 months ago
- Code for SFT, RLHF, and DPO for vision-based LLMs, including the LLaVA models and LLaMA-3.2-vi… ☆102 · Updated 4 months ago
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆101 · Updated 2 weeks ago
- ☆42 · Updated 2 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆81 · Updated 3 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆115 · Updated 4 months ago
- The first released survey paper on hallucinations in large vision-language models (LVLMs). To keep track of this field and contin… ☆63 · Updated 7 months ago
- Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" ☆33 · Updated this week
- ☆30 · Updated 5 months ago
- ☆77 · Updated last month
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆41 · Updated 9 months ago