microsoft / LLM2CLIP
LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.
☆567 · Updated last week
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the repositories listed below.
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆399 · Updated 2 weeks ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆390 · Updated last year
- When do we not need larger vision models? ☆412 · Updated 10 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆332 · Updated last year
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆246 · Updated last year
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆271 · Updated 10 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆503 · Updated last year
- ☆380 · Updated 10 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆504 · Updated 3 weeks ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆413 · Updated 7 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆273 · Updated 11 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆310 · Updated 6 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆874 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆408 · Updated 7 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆356 · Updated 5 months ago
- NeurIPS 2024 paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆577 · Updated last year
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-Language Era" ☆211 · Updated last year
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆504 · Updated 3 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆681 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆929 · Updated 4 months ago
- My implementation of "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆267 · Updated last month
- [ICML 2024 Oral] "MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions" ☆206 · Updated last year
- Long Context Transfer from Language to Vision ☆398 · Updated 8 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆361 · Updated 4 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆209 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆357 · Updated 2 weeks ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆409 · Updated last week
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆291 · Updated 4 months ago
- ☆632 · Updated last year
- Official repository for the paper PLLaVA ☆673 · Updated last year