LLM2CLIP significantly improves already state-of-the-art CLIP models.
☆631 (updated Feb 1, 2026)
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the repositories listed below.
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] (☆579, updated Feb 11, 2026)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (☆1,811, updated Nov 27, 2025)
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions (☆138, updated May 8, 2025)
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training (☆227, updated Mar 20, 2025)
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception (☆159, updated Dec 6, 2024)
- E5-V: Universal Embeddings with Multimodal Large Language Models (☆274, updated Dec 10, 2025)
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. (☆1,402, updated Aug 4, 2025)
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" (☆893, updated Aug 13, 2024)
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning (☆2,125, updated Dec 12, 2025)
- When do we not need larger vision models? (☆413, updated Feb 8, 2025)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more (☆3,368, updated May 19, 2025)
- EVE Series: Encoder-Free Vision-Language Models from BAAI (☆368, updated Jul 24, 2025)
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding (☆50, updated Jan 14, 2025)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,647, updated Aug 1, 2024)
- ☆4,566 (updated Sep 14, 2025)
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" (☆32, updated Jul 8, 2025)
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" (☆1,649, updated Dec 4, 2025)
- An open-source implementation of CLIP (☆13,397, updated Feb 20, 2026)
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better (☆299, updated Jan 23, 2025)
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design (☆1,986, updated Nov 7, 2025)
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C…" (☆279, updated Jan 16, 2025)
- Codebase of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs (☆102, updated Mar 23, 2025)
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" (☆33, updated Jan 26, 2026)
- 4M: Massively Multimodal Masked Modeling (☆1,788, updated Jun 2, 2025)
- Grounded Language-Image Pre-training (☆2,572, updated Jan 24, 2024)
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance. (☆9,817, updated Sep 22, 2025)
- State-of-the-art image and video CLIP, multimodal large language models, and more (☆2,177, updated Feb 11, 2026)
- Codebase for Aria, an open multimodal-native MoE (☆1,082, updated Jan 22, 2025)
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks (☆390, updated Jul 9, 2024)
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1 (☆249, updated Aug 12, 2025)
- [ICCV 2025] Implementation of Describe Anything: Detailed Localized Image and Video Captioning (☆1,449, updated Jun 26, 2025)
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks (☆3,845, updated this week)
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models (☆86, updated Oct 26, 2025)
- Next-Token Prediction is All You Need (☆2,350, updated Jan 12, 2026)
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" (☆46, updated Dec 1, 2024)
- CLIP-like model evaluation (☆802, updated Jan 15, 2026)
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding (☆66, updated Jun 10, 2025)
- Official repository for "AM-RADIO: Reduce All Domains Into One" (☆1,665, updated Feb 11, 2026)
- An Enhanced CLIP Framework for Learning with Synthetic Captions (☆40, updated Apr 18, 2025)