aimagelab / HySAC
Hyperbolic Safety-Aware Vision-Language Models. CVPR 2025
☆25 · Updated 8 months ago
Alternatives and similar repositories for HySAC
Users interested in HySAC are comparing it to the repositories listed below.
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt…" ☆46 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆57 · Updated last year
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆41 · Updated last year
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆60 · Updated 2 months ago
- Chain-of-Frames Reasoning Traces ☆34 · Updated 5 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆39 · Updated last year
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆57 · Updated last year
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language… ☆14 · Updated last year
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆51 · Updated 2 months ago
- ☆22 · Updated last year
- Code for the paper "Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection" ☆48 · Updated 9 months ago
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated 2 years ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆107 · Updated last year
- ☆22 · Updated last year
- [ICLR 2025] SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image and Video Generation ☆47 · Updated 11 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆32 · Updated last year
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ☆45 · Updated 5 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆82 · Updated 10 months ago
- Official repository of Personalized Visual Instruct Tuning ☆33 · Updated 9 months ago
- [CVPRW-25 MMFM] Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…" ☆50 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆47 · Updated 11 months ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆20 · Updated last year
- Code for the ICLR 2025 paper "Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs" ☆21 · Updated 7 months ago
- Awesome Vision-Language Compositionality, a comprehensive curation of research papers in the literature ☆34 · Updated 10 months ago
- 🌋👵🏻 Yo'LLaVA: Your Personalized Language and Vision Assistant (NeurIPS 2024) ☆118 · Updated 9 months ago
- cliptrase ☆47 · Updated last year
- A simple PyTorch implementation of a CLIP-based baseline for image-text matching ☆17 · Updated 2 years ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (accepted by CVPR 2024) ☆51 · Updated last year
- Code for "DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models" ☆75 · Updated 5 months ago
- NegCLIP ☆38 · Updated 2 years ago