diaoquesang / Code-in-Paper-Guide
🌟 A step-by-step guide to inserting code links in your paper
☆22 · Updated 3 months ago
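In keeping with the repository's topic, a common way to surface a code link in a LaTeX paper is an abstract sentence or a title footnote using `hyperref`. A minimal sketch (the repository URL is a placeholder, not from this listing):

```latex
% Preamble: hyperref makes URLs clickable and allows line breaks in long links
\usepackage{hyperref}

% Option 1: a sentence at the end of the abstract
Our code is available at \url{https://github.com/user/repo}.

% Option 2: an unnumbered footnote attached to the title via \thanks
\title{Your Paper Title\thanks{Code: \url{https://github.com/user/repo}}}
```

Many venues prefer the title-footnote form so the link appears on the first page without consuming abstract space.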
Alternatives and similar repositories for Code-in-Paper-Guide
Users interested in Code-in-Paper-Guide are comparing it to the repositories listed below:
- Code for paper: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models ☆34 · Updated 10 months ago
- [ICCV 2025] Official PyTorch Code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆103 · Updated last week
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆335 · Updated 2 months ago
- [ICCV25 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆107 · Updated 2 months ago
- [WACV 2025] Code for Enhancing Vision-Language Few-Shot Adaptation with Negative Learning ☆10 · Updated 8 months ago
- [CVPR'25 Oral] LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models ☆38 · Updated 2 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆284 · Updated 6 months ago
- [ICLR 2025] Official Implementation of Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection ☆47 · Updated 3 months ago
- [MICCAI 2024] Can LLMs' Tuning Methods Work in Medical Multimodal Domain? ☆17 · Updated last year
- [ACL'25 Main] Official Implementation of HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of Multimodal Large Languag… ☆37 · Updated last month
- 🔎 Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation" ☆45 · Updated 7 months ago
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆49 · Updated last week
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ☆84 · Updated 6 months ago
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NIPS24) ☆30 · Updated last month
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆276 · Updated 2 years ago
- Easy wrapper for inserting LoRA layers in CLIP ☆40 · Updated last year
- ☆48 · Updated 8 months ago
- [CVPR 2025] Official implementation of the paper "Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practi… ☆37 · Updated this week
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆195 · Updated 3 months ago
- Code for SAM-Guided Enhanced Fine-Grained Encoding with Mixed Semantic Learning for Medical Image Captioning ☆16 · Updated last year
- The official repository of the paper "Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine" ☆100 · Updated 9 months ago
- The first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆82 · Updated last year
- [TPAMI 2025] Generalized Semantic Contrastive Learning via Embedding Side Information for Few-Shot Object Detection ☆35 · Updated 4 months ago
- [AAAI 2024] Official implementation of TGP-T ☆29 · Updated last year
- An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPR… ☆262 · Updated 4 months ago
- A curated list of publications on image and video segmentation leveraging Multimodal Large Language Models (MLLMs), highlighting state-of… ☆145 · Updated this week
- The official repository of "Knowledge Bridger: Towards Training-Free Missing Modality Completion" ☆18 · Updated 4 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆91 · Updated 3 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆65 · Updated 8 months ago
- Official implementation of our LaZSL (ICCV'25) ☆36 · Updated 3 months ago