Mid-Push / SmartCLIP
SmartCLIP: A training method to improve CLIP with both short and long texts
☆36 · Updated 6 months ago
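
For orientation, below is a minimal sketch of the general idea named in the tagline above: CLIP-style contrastive training where each image is paired with both a short and a long caption. This is an illustrative assumption in plain PyTorch, not the SmartCLIP training objective from the repository; the function names and the simple averaging of the two losses are hypothetical.

```python
# Illustrative sketch only (assumed formulation, not the SmartCLIP method):
# a CLIP-style symmetric contrastive loss applied to both short and long captions.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats: torch.Tensor,
                          text_feats: torch.Tensor,
                          logit_scale: float = 100.0) -> torch.Tensor:
    """Standard symmetric InfoNCE loss over a batch of paired image/text features."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = logit_scale * image_feats @ text_feats.t()          # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)  # matched pairs on the diagonal
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

def short_and_long_caption_loss(image_feats, short_text_feats, long_text_feats, logit_scale=100.0):
    """Hypothetical combination: average the contrastive loss over short and long captions."""
    return 0.5 * (clip_contrastive_loss(image_feats, short_text_feats, logit_scale)
                  + clip_contrastive_loss(image_feats, long_text_feats, logit_scale))
```
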
Alternatives and similar repositories for SmartCLIP
Users interested in SmartCLIP are comparing it to the repositories listed below.
- [AAAI 2025 Oral] Multi-task Visual Grounding with Coarse-to-Fine Consistency Constraints ☆44 · Updated 6 months ago
- [CVPR 2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression ☆59 · Updated 3 months ago
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆150 · Updated last month
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆56 · Updated 2 months ago
- [CVPR 2025] Official implementation of the paper "Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practi…" ☆42 · Updated 2 months ago
- [NeurIPS 2024] SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion ☆100 · Updated 2 months ago
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception ☆148 · Updated this week
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆158 · Updated 3 weeks ago
- RefDrone: A Challenging Benchmark for Drone Scene Referring Expression Comprehension ☆29 · Updated 3 weeks ago
- [NeurIPS 2025] Official repository for “FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models” ☆28 · Updated last month
- [NeurIPS 2025] The official PyTorch implementation of "Vision Function Layer in MLLM" ☆25 · Updated 3 weeks ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆110 · Updated 3 weeks ago
- [ICLR 2025] Text4Seg: Reimagining Image Segmentation as Text Generation ☆157 · Updated 2 months ago
- Code for FineLIP ☆38 · Updated last month
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆49 · Updated this week
- [ICLR 2025] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆313 · Updated 8 months ago
- Visual Grounding with Multi-modal Conditional Adaptation (ACM MM 2024 Oral) ☆26 · Updated 7 months ago
- [ACM MM 2024] Hierarchical Multimodal Fine-grained Modulation for Visual Grounding ☆59 · Updated 2 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆83 · Updated 10 months ago
- Official PyTorch Code for Anchor Token Guided Prompt Learning Methods: [ICCV 2025] ATPrompt and [arXiv 2511.21188] AnchorOPT ☆121 · Updated 3 weeks ago
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ☆54 · Updated 3 months ago
- [ICCV 2025] The official PyTorch implementation of "LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs" ☆21 · Updated 2 months ago
- Official repository of the paper "A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models" ☆84 · Updated 4 months ago
- ☆101 · Updated 5 months ago
- [ICCV 2023] CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection ☆17 · Updated 8 months ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆53 · Updated 4 months ago
- [CVPR 2025] Official PyTorch Code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P…" ☆90 · Updated 6 months ago
- The official implementation of our paper "IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Prima…" ☆18 · Updated 9 months ago
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆37 · Updated 3 weeks ago
- [ECCV 2024] An official repo for Towards Natural Language-Guided Drones: GeoText-1652 Benchmark with Spatial Relation Matching ☆108 · Updated 11 months ago