BRZ911 / Wrong-of-Thought
[EMNLP 2024 Findings] Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information
☆13 · Updated last year
Alternatives and similar repositories for Wrong-of-Thought
Users interested in Wrong-of-Thought are comparing it to the repositories listed below.
- Reinforcement learning code for the SPA-VL dataset ☆42 · Updated last year
- ☆55 · Updated last year
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆29 · Updated 2 weeks ago
- ☆111 · Updated 3 months ago
- ☆178 · Updated last year
- Accepted by ECCV 2024 ☆179 · Updated last year
- ☆56 · Updated 6 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆173 · Updated 2 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆83 · Updated 4 months ago
- ☆155 · Updated last month
- A paper list on LLMs and Multimodal LLMs ☆50 · Updated last week
- ☆37 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆78 · Updated last week
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆92 · Updated last year
- Code for the paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) ☆20 · Updated 8 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆86 · Updated 10 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆169 · Updated 9 months ago
- [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" ☆52 · Updated 5 months ago
- A survey on harmful fine-tuning attacks for large language models ☆227 · Updated last month
- ☆44 · Updated 6 months ago
- A versatile toolkit for applying the Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆140 · Updated 4 months ago
- Repository for DEER, a Dynamic Early Exit in Reasoning method for Large Reasoning Language Models ☆175 · Updated 5 months ago
- ☆294 · Updated 5 months ago
- Code for the ICLR 2025 paper "GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment" ☆20 · Updated 10 months ago
- Official GitHub repository for the survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …" ☆157 · Updated 7 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art long2short methods for large reasoning models. It contains… ☆254 · Updated 4 months ago
- Accepted by the IJCAI-24 Survey Track ☆225 · Updated last year
- [AAAI'26 Oral] Official implementation of "STAR-1: Safer Alignment of Reasoning LLMs with 1K Data" ☆32 · Updated 8 months ago
- Chain of Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆71 · Updated 8 months ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆32 · Updated 7 months ago