microsoft/LLM2CLIP
LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.
☆506 · Updated last month
Alternatives and similar repositories for LLM2CLIP:
Users interested in LLM2CLIP are comparing it to the libraries listed below.
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆374 · Updated this week
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆236 · Updated 8 months ago
- When do we not need larger vision models? ☆388 · Updated 2 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆319 · Updated 9 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆385 · Updated 9 months ago
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆240 · Updated 3 months ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆289 · Updated 3 months ago
- Long Context Transfer from Language to Vision ☆373 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆866 · Updated 5 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆322 · Updated last month
- ☆354 · Updated 2 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆340 · Updated last month
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆577 · Updated 6 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆796 · Updated 8 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆481 · Updated 8 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆417 · Updated 3 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆203 · Updated 10 months ago
- Official repository for the paper PLLaVA ☆647 · Updated 8 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆300 · Updated 9 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆214 · Updated 9 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆241 · Updated 4 months ago
- Anole: An Open, Autoregressive and Native Multimodal Models for Interleaved Image-Text Generation ☆751 · Updated 8 months ago
- ☆369 · Updated last month
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆356 · Updated this week
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆273 · Updated 3 months ago
- ☆328 · Updated last year
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆526 · Updated 6 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆360 · Updated 5 months ago
- ☆610 · Updated last year
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆228 · Updated 3 weeks ago