LLM2CLIP significantly improves already state-of-the-art CLIP models.
☆651 · Feb 1, 2026 · Updated 2 months ago
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the libraries listed below.
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆635 · Updated this week
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆138 · May 8, 2025 · Updated 11 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,833 · Nov 27, 2025 · Updated 5 months ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆224 · Mar 20, 2025 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆159 · Dec 6, 2024 · Updated last year
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆897 · Aug 13, 2024 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆275 · Dec 10, 2025 · Updated 4 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,415 · Aug 4, 2025 · Updated 8 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,134 · Dec 12, 2025 · Updated 4 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆50 · Jan 14, 2025 · Updated last year
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" ☆1,681 · Apr 4, 2026 · Updated 3 weeks ago
- When do we not need larger vision models? ☆418 · Feb 8, 2025 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,669 · Aug 1, 2024 · Updated last year
- ☆4,640 · Apr 15, 2026 · Updated 2 weeks ago
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C…" ☆285 · Jan 16, 2025 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,430 · May 19, 2025 · Updated 11 months ago
- An open source implementation of CLIP. ☆13,728 · Apr 20, 2026 · Updated last week
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆301 · Jan 23, 2025 · Updated last year
- FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens ☆17 · Sep 8, 2025 · Updated 7 months ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆104 · Mar 23, 2025 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,995 · Nov 7, 2025 · Updated 5 months ago
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆33 · Jul 8, 2025 · Updated 9 months ago
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. ☆9,983 · Sep 22, 2025 · Updated 7 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆368 · Jul 24, 2025 · Updated 9 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · May 24, 2023 · Updated 2 years ago
- Grounded Language-Image Pre-training ☆2,588 · Jan 24, 2024 · Updated 2 years ago
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" ☆32 · Jan 26, 2026 · Updated 3 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆40 · Apr 18, 2025 · Updated last year
- Official PyTorch implementation of "MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced …" ☆92 · Nov 15, 2024 · Updated last year
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,260 · Apr 13, 2026 · Updated 2 weeks ago
- Data release for the ImageInWords (IIW) paper. ☆225 · Nov 17, 2024 · Updated last year
- CLIP-like model evaluation ☆813 · Mar 19, 2026 · Updated last month
- ☆58 · Apr 24, 2024 · Updated 2 years ago
- Codebase for Aria - an Open Multimodal Native MoE ☆1,086 · Jan 22, 2025 · Updated last year
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,484 · Jun 26, 2025 · Updated 10 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆392 · Jul 9, 2024 · Updated last year
- 4M: Massively Multimodal Masked Modeling ☆1,794 · Jun 2, 2025 · Updated 10 months ago
- [EMNLP 2025 Main] The official code of "Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval" ☆26 · Mar 30, 2026 · Updated 3 weeks ago
- ☆17 · Oct 1, 2024 · Updated last year