LLM2CLIP significantly improves already state-of-the-art CLIP models.
☆640, updated Feb 1, 2026
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the repositories listed below.
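The common thread among LLM2CLIP and the alternatives below is CLIP-style contrastive pre-training: image and text embeddings from paired encoders are aligned with a symmetric InfoNCE loss. A minimal NumPy sketch of that objective, with all names, shapes, and the temperature value being illustrative assumptions rather than any project's actual code:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; the loss
    pulls them together and pushes mismatched pairs apart.
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_emb)
    logits = img @ txt.T / temperature           # (B, B) cosine similarities
    labels = np.arange(len(logits))              # diagonal entries are the positives

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)     # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

rng = np.random.default_rng(0)
image_emb = rng.normal(size=(8, 512))   # stand-in for vision-encoder outputs
text_emb = rng.normal(size=(8, 512))    # stand-in for (LLM-based) text-encoder outputs
loss = clip_contrastive_loss(image_emb, text_emb)
print(f"contrastive loss: {loss:.4f}")
```

LLM2CLIP's contribution sits in the text branch of this setup: replacing CLIP's original text encoder with an LLM-derived one, while the contrastive alignment itself stays in this form.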
- Code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] — ☆597, updated this week
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions — ☆138, updated May 8, 2025
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 — ☆1,824, updated Nov 27, 2025
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training — ☆226, updated Mar 20, 2025
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception — ☆159, updated Dec 6, 2024
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" — ☆892, updated Aug 13, 2024
- E5-V: Universal Embeddings with Multimodal Large Language Models — ☆275, updated Dec 10, 2025
- Code and model checkpoints for the AIMv1 and AIMv2 research projects — ☆1,410, updated Aug 4, 2025
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning — ☆2,132, updated Dec 12, 2025
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding — ☆50, updated Jan 14, 2025
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" — ☆1,658, updated Dec 4, 2025
- When do we not need larger vision models? — ☆415, updated Feb 8, 2025
- EVA Series: Visual Representation Fantasies from BAAI — ☆2,652, updated Aug 1, 2024
- ☆4,591, updated Sep 14, 2025
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C…" — ☆282, updated Jan 16, 2025
- An open-source implementation of CLIP — ☆13,528, updated Mar 12, 2026
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more — ☆3,380, updated May 19, 2025
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better — ☆301, updated Jan 23, 2025
- FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens — ☆17, updated Sep 8, 2025
- Codebase of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs — ☆103, updated Mar 23, 2025
- Cambrian-1, a family of multimodal LLMs with a vision-centric design — ☆1,990, updated Nov 7, 2025
- [CVPR 2025] PyTorch implementation of "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" — ☆33, updated Jul 8, 2025
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o's performance) — ☆9,879, updated Sep 22, 2025
- EVE Series: Encoder-Free Vision-Language Models from BAAI — ☆368, updated Jul 24, 2025
- Grounded Language-Image Pre-training — ☆2,580, updated Jan 24, 2024
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models" (coming soon) — ☆11, updated May 24, 2023
- [EMNLP 2025 Main] Official code of "Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval" — ☆22, updated Mar 11, 2026
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" — ☆33, updated Jan 26, 2026
- An Enhanced CLIP Framework for Learning with Synthetic Captions — ☆40, updated Apr 18, 2025
- Official PyTorch implementation of "MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced …" — ☆91, updated Nov 15, 2024
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! — ☆2,204, updated Mar 12, 2026
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks — ☆392, updated Jul 9, 2024
- Data release for the ImageInWords (IIW) paper — ☆227, updated Nov 17, 2024
- CLIP-like model evaluation — ☆806, updated Jan 15, 2026
- ☆58, updated Apr 24, 2024
- Codebase for Aria, an Open Multimodal Native MoE — ☆1,084, updated Jan 22, 2025
- [ICCV 2025] Implementation of "Describe Anything: Detailed Localized Image and Video Captioning" — ☆1,464, updated Jun 26, 2025
- 4M: Massively Multimodal Masked Modeling — ☆1,787, updated Jun 2, 2025
- ☆58, updated Feb 27, 2025