diaoquesang / Code-in-Paper-Guide
🌟 A hands-on, step-by-step guide to inserting code links in your paper
☆24 · Updated 6 months ago
Alternatives and similar repositories for Code-in-Paper-Guide
Users interested in Code-in-Paper-Guide are comparing it to the repositories listed below.
- Code for the paper: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models ☆51 · Updated last year
- [ICCV25 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆166 · Updated last month
- Official PyTorch code for the anchor-token-guided prompt learning methods: [ICCV 2025] ATPrompt and [arXiv 2511.21188] AnchorOPT ☆122 · Updated last week
- Code for SAM-Guided Enhanced Fine-Grained Encoding with Mixed Semantic Learning for Medical Image Captioning ☆16 · Updated last year
- [CVPR 2024] Official PyTorch code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆347 · Updated last month
- The official repository of the paper 'Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine' ☆117 · Updated last year
- [MICCAI 2024] Can LLMs' Tuning Methods Work in Medical Multimodal Domain? ☆17 · Updated last year
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆335 · Updated 9 months ago
- [CVPR 2025] Code release for Patch Matters: Training-free Fine-grained Image Caption Enhancement via Local Perception ☆20 · Updated 7 months ago
- ☆18 · Updated 7 months ago
- A curated list of publications on image and video segmentation leveraging Multimodal Large Language Models (MLLMs), highlighting state-of… ☆188 · Updated 2 weeks ago
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆60 · Updated 3 months ago
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆34 · Updated 2 months ago
- [ACL'25 Main] Official implementation of HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of Multimodal Large Languag… ☆47 · Updated 4 months ago
- [WACV 2025] Code for Enhancing Vision-Language Few-Shot Adaptation with Negative Learning ☆11 · Updated 11 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ☆95 · Updated 9 months ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆283 · Updated 2 years ago
- [CVPR 2025] Exploring CLIP's Dense Knowledge for Weakly Supervised Semantic Segmentation ☆65 · Updated 7 months ago
- [ICML 2024] Official PyTorch implementation of CoMC: Language-Driven Cross-Modal Classifier for Zero-Shot Multi-Label Image Recognition ☆16 · Updated last year
- [CVPR 2025] Hybrid Global-Local Representation with Augmented Spatial Guidance for Zero-Shot Referring Image Segmentation ☆29 · Updated 7 months ago
- 🔎 Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation" ☆47 · Updated 10 months ago
- [CVPR 2025] Understanding Fine-tuning CLIP for Open-vocabulary Semantic Segmentation in Hyperbolic Space ☆36 · Updated 6 months ago
- ✨ [AAAI 2025] Queryable Prototype Multiple Instance Learning with Vision-Language Models for Incremental Whole Slide Image Classification ☆52 · Updated 9 months ago
- Detecting and Evaluating Medical Hallucinations in Large Vision Language Models ☆11 · Updated last year
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆89 · Updated 11 months ago
- ☆49 · Updated 11 months ago
- [AAAI 2024] Official implementation of TGP-T ☆33 · Updated last year
- Easy wrapper for inserting LoRA layers in CLIP ☆40 · Updated last year
- The first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆91 · Updated last year
- [CVPR'25 Oral] LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models ☆48 · Updated 5 months ago