Mid-Push / SmartCLIP
SmartCLIP: A training method to improve CLIP with both short and long texts
☆36 · Updated 6 months ago
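The tagline above says SmartCLIP trains CLIP with both short and long texts. As a rough illustration only (the listing does not describe SmartCLIP's actual method, and all names below are hypothetical), a standard CLIP-style symmetric contrastive loss could simply be applied to image embeddings paired with both short-caption and long-caption embeddings:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between matched image/text embedding batches."""
    # L2-normalize both embedding sets
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # Cosine-similarity logits, scaled by temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # Numerically stable log-softmax; diagonal entries are the positives
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average of image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy batch: 4 images, each with a short and a long caption embedding
rng = np.random.default_rng(0)
images = rng.normal(size=(4, 32))
short_caps = rng.normal(size=(4, 32))
long_caps = rng.normal(size=(4, 32))

# Hypothetical combined objective: supervise with both caption lengths
loss = clip_contrastive_loss(images, short_caps) + clip_contrastive_loss(images, long_caps)
```

This is just the vanilla CLIP objective summed over two caption views; the real repository presumably does something more sophisticated with the short/long split.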
Alternatives and similar repositories for SmartCLIP
Users interested in SmartCLIP are comparing it to the repositories listed below
- [ICCV25 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆148 · Updated 3 weeks ago
- [CVPR2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression ☆59 · Updated 3 months ago
- [AAAI2025 Oral] Multi-task Visual Grounding with Coarse-to-Fine Consistency Constraints ☆43 · Updated 6 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆110 · Updated 2 weeks ago
- [ICCV 2025] The official PyTorch implementation of "LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs" ☆21 · Updated 2 months ago
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆55 · Updated 2 months ago
- [NeurIPS 2025] The official PyTorch implementation of the "Vision Function Layer in MLLM" ☆25 · Updated 3 weeks ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆157 · Updated 3 weeks ago
- [NeurIPS2024] SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion ☆100 · Updated 2 months ago
- [ICCV2025] PropVG: End-to-End Proposal-Driven Visual Grounding with Multi-Granularity Discrimination ☆32 · Updated 2 months ago
- [CVPR 2025] Official PyTorch Code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P… ☆90 · Updated 6 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆311 · Updated 8 months ago
- [NeurIPS 2025] Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models" ☆27 · Updated last month
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆82 · Updated 10 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆49 · Updated this week
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆53 · Updated 4 months ago
- [CVPR2025] Official implementation of the paper "Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practi… ☆42 · Updated 2 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆78 · Updated last month
- [NeurIPS2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆202 · Updated 5 months ago
- Official PyTorch Code for Anchor Token Guided Prompt Learning Methods: [ICCV 2025] ATPrompt and [arXiv 2511.21188] AnchorOPT ☆121 · Updated 3 weeks ago
- Code for FineLIP ☆38 · Updated last month
- RefDrone: A Challenging Benchmark for Drone Scene Referring Expression Comprehension ☆29 · Updated 2 weeks ago
- [AAAI2025] DeMo: Decoupled Feature-Based Mixture of Experts for Multi-Modal Object Re-Identification ☆65 · Updated 10 months ago
- ☆12 · Updated 7 months ago
- [ICLR2025] Text4Seg: Reimagining Image Segmentation as Text Generation ☆157 · Updated 2 months ago
- ☆101 · Updated 4 months ago
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆37 · Updated 3 weeks ago
- Official code repository of Shuffle-R1 ☆25 · Updated 4 months ago
- Visual Grounding with Multi-modal Conditional Adaptation (ACMMM 2024 Oral) ☆26 · Updated 7 months ago
- Official repository of the paper "A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models" ☆84 · Updated 4 months ago