Mid-Push / SmartCLIP
SmartCLIP: A training method to improve CLIP with both short and long texts
☆30 · Updated 6 months ago
Alternatives and similar repositories for SmartCLIP
Users interested in SmartCLIP are also comparing it with the repositories listed below.
- [ICCV 2025] The official PyTorch implementation of "LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs" ☆21 · Updated last month
- [CVPR 2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression ☆58 · Updated 2 months ago
- [CVPR 2025] Official implementation of the paper "Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practi…" ☆41 · Updated last month
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆145 · Updated last week
- [NeurIPS 2024] SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion ☆100 · Updated last month
- [AAAI 2025 Oral] Multi-task Visual Grounding with Coarse-to-Fine Consistency Constraints ☆42 · Updated 5 months ago
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆36 · Updated this week
- RefDrone: A Challenging Benchmark for Drone Scene Referring Expression Comprehension ☆27 · Updated last week
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆148 · Updated 3 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆202 · Updated 5 months ago
- [ICLR 2025] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆308 · Updated 8 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆44 · Updated this week
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆51 · Updated 3 months ago
- [CVPR 2025] Official PyTorch code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P…" ☆87 · Updated 5 months ago
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆55 · Updated last month
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆109 · Updated 6 months ago
- Official code repository of Shuffle-R1 ☆25 · Updated 3 months ago
- [NeurIPS 2025] Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models" ☆25 · Updated last week
- Some experiences to help new researchers grow up ☆43 · Updated 2 years ago
- Visual Grounding with Multi-modal Conditional Adaptation (ACM MM 2024 Oral) ☆26 · Updated 6 months ago
- [ICLR 2025] Text4Seg: Reimagining Image Segmentation as Text Generation ☆154 · Updated last month
- [CVPR 2025] Hybrid Global-Local Representation with Augmented Spatial Guidance for Zero-Shot Referring Image Segmentation ☆28 · Updated 5 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆81 · Updated 5 months ago
- [ICCV 2023] CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection ☆17 · Updated 7 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆79 · Updated 10 months ago
- [CVPR 2024] Code for HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation ☆75 · Updated last year
- Official repository of the paper "A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models" ☆81 · Updated 3 months ago
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ☆23 · Updated 9 months ago
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception ☆146 · Updated 6 months ago
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ☆42 · Updated 2 weeks ago