BRZ911 / Wrong-of-Thought
Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information (WoT)
☆13 · Updated 8 months ago
Alternatives and similar repositories for Wrong-of-Thought
Users interested in Wrong-of-Thought are comparing it to the repositories listed below.
- ☆101 · Updated last month
- ☆33 · Updated last week
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆20 · Updated last month
- The reinforcement learning code for the SPA-VL dataset ☆33 · Updated 11 months ago
- This repository is continuously updated with the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆41 · Updated 2 months ago
- ☆46 · Updated 6 months ago
- Accepted by ECCV 2024 ☆130 · Updated 7 months ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆70 · Updated 10 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆64 · Updated this week
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆221 · Updated this week
- ☆74 · Updated last year
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆107 · Updated 7 months ago
- Latest Advances on Long Chain-of-Thought Reasoning ☆358 · Updated last week
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆107 · Updated this week
- ☆49 · Updated 11 months ago
- ☆33 · Updated 8 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆237 · Updated this week
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆90 · Updated 6 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆82 · Updated 5 months ago
- Up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers & resources ☆132 · Updated 3 weeks ago
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆40 · Updated 3 weeks ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆59 · Updated 5 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆341 · Updated 9 months ago
- 😎 Curated list of awesome LMM hallucination papers, methods & resources. ☆149 · Updated last year
- ☆27 · Updated 7 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆58 · Updated 10 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆147 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆123 · Updated 2 months ago
- Awesome RL-based LLM Reasoning ☆511 · Updated last month
- Chain-of-Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆54 · Updated 2 months ago