microsoft / LLM2CLIP
LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.
☆495 · Updated last week
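For context on what LLM2CLIP improves, here is a minimal sketch of standard CLIP zero-shot image–text matching using the Hugging Face `transformers` API with the vanilla OpenAI checkpoint. This illustrates the baseline contrastive scoring that LLM2CLIP builds on, not the LLM2CLIP pipeline itself; the actual LLM2CLIP checkpoints and loading code may differ.

```python
# Baseline CLIP zero-shot image-text matching (NOT the LLM2CLIP pipeline itself).
# Uses the standard Hugging Face transformers CLIP API with the vanilla OpenAI checkpoint.
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Example image from the COCO validation set; substitute any local image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

texts = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each caption;
# softmax turns the similarities into match probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```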
Alternatives and similar repositories for LLM2CLIP:
Users interested in LLM2CLIP are comparing it to the repositories listed below:
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆370 · Updated this week
- When do we not need larger vision models? ☆383 · Updated last month
- [ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆776 · Updated 7 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆316 · Updated 8 months ago
- Long Context Transfer from Language to Vision ☆369 · Updated 2 weeks ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆385 · Updated 8 months ago
- LLaVA-Interactive-Demo ☆367 · Updated 8 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆234 · Updated 7 months ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆282 · Updated 2 months ago
- [CVPR 2025] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive… ☆238 · Updated 2 months ago
- ☆322 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆478 · Updated 7 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆856 · Updated 4 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆237 · Updated 3 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆211 · Updated 9 months ago
- ☆344 · Updated last month
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆496 · Updated last week
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆404 · Updated last week
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆270 · Updated 2 months ago
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆517 · Updated 5 months ago
- ☆366 · Updated last month
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆576 · Updated last year
- Official repository for the paper PLLaVA ☆644 · Updated 8 months ago
- 📖 A repository organizing papers, code, and other resources related to unified multimodal models ☆443 · Updated 2 weeks ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆202 · Updated 9 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆794 · Updated 8 months ago
- Quick exploration into fine-tuning Florence-2 ☆305 · Updated 6 months ago
- MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning ☆459 · Updated this week
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆353 · Updated 4 months ago
- VisionLLM Series ☆1,036 · Updated last month