Wu-Zongyu / LanP
Official Implementation of 'LanP: Rethinking the Impact of Language Priors in Large Vision-Language Models'
☆10 · Updated 3 months ago
Alternatives and similar repositories for LanP
Users interested in LanP are comparing it to the repositories listed below.
- A preview version of CharmBench, a novel multimodal reasoning benchmark. ☆26 · Updated this week
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆65 · Updated this week
- ☆46 · Updated 6 months ago
- A paper list about large language models and multimodal models (Diffusion, VLM). From foundations to applications. It is only used to rec… ☆57 · Updated 3 months ago
- Reading notes on papers about OOD generalization ☆31 · Updated 5 months ago
- Code for the ICLR 2025 paper "GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment" ☆10 · Updated 3 months ago
- ☆27 · Updated last month
- ☆17 · Updated 4 months ago
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆102 · Updated 3 weeks ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆58 · Updated 10 months ago
- [ICLR 2025] Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron ☆14 · Updated last month
- A paper list on LLMs and multimodal LLMs ☆40 · Updated last week
- ☆52 · Updated last week
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 4 months ago
- The code repo of the paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Compromising Usa… ☆28 · Updated 2 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆126 · Updated last month
- Official repository for 'Safety Challenges in Large Reasoning Models: A Survey' - exploring safety risks, attacks, and defenses for Large… ☆36 · Updated last week
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆26 · Updated 2 months ago
- [ICLR 2025] PyTorch Implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" ☆21 · Updated 2 weeks ago
- Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs ☆24 · Updated last month
- Awesome Large Reasoning Model (LRM) Safety. This repository collects safety-related research on large reasoning models such as … ☆64 · Updated this week
- This repo covers the safety topic, including attacks, defenses, and studies related to reasoning and RL ☆19 · Updated this week
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated 7 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆69 · Updated last week
- ☆26 · Updated this week
- ☆38 · Updated 3 months ago
- Provides .bst files for the NeurIPS LaTeX template ☆50 · Updated last month
- ECSO (makes MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆24 · Updated 7 months ago
- A curated list of resources for activation engineering ☆85 · Updated last week
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆68 · Updated 3 months ago