BRZ911 / Wrong-of-Thought
[EMNLP 2024 Findings] Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information Utilization
☆13 · Updated last year
Alternatives and similar repositories for Wrong-of-Thought
Users interested in Wrong-of-Thought are comparing it to the libraries listed below.
- The reinforcement learning code for the SPA-VL dataset ☆38 · Updated last year
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆24 · Updated 3 weeks ago
- ☆54 · Updated 4 months ago
- ☆108 · Updated last month
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆144 · Updated 2 months ago
- Latest Advances on Long Chain-of-Thought Reasoning ☆523 · Updated 3 months ago
- ☆53 · Updated last year
- Accepted by ECCV 2024 ☆160 · Updated last year
- A paper list on LLMs and multimodal LLMs ☆48 · Updated 2 weeks ago
- Chain of Thought (CoT) is so hot, and so long! We need a short reasoning process! ☆69 · Updated 6 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆247 · Updated 2 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆306 · Updated 2 weeks ago
- ☆165 · Updated last year
- An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models ☆194 · Updated 2 weeks ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆61 · Updated 4 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆375 · Updated last year
- ☆38 · Updated 4 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆75 · Updated 7 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆97 · Updated 10 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆73 · Updated last month
- [ACL 2024] SALAD benchmark & MD-Judge ☆162 · Updated 7 months ago
- This is the repository of DEER, a Dynamic Early Exit in Reasoning method for Large Reasoning Language Models. ☆170 · Updated 3 months ago
- ☆275 · Updated 3 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆80 · Updated 10 months ago
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆51 · Updated 2 months ago
- Accepted by IJCAI-24 Survey Track ☆218 · Updated last year
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆872 · Updated 3 weeks ago
- Official repository of RiOSWorld ☆41 · Updated last week
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆170 · Updated last week
- Official repository for "CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation" ☆34 · Updated last month