LLM2CLIP significantly improves already state-of-the-art CLIP models.
☆645 · Feb 1, 2026 · Updated 2 months ago
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the repositories listed below.
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] (☆619 · Mar 28, 2026 · Updated last week)
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions (☆138 · May 8, 2025 · Updated 11 months ago)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (☆1,828 · Nov 27, 2025 · Updated 4 months ago)
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training (☆224 · Mar 20, 2025 · Updated last year)
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception (☆159 · Dec 6, 2024 · Updated last year)
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" (☆895 · Aug 13, 2024 · Updated last year)
- E5-V: Universal Embeddings with Multimodal Large Language Models (☆274 · Dec 10, 2025 · Updated 3 months ago)
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. (☆1,412 · Aug 4, 2025 · Updated 8 months ago)
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning (☆2,131 · Dec 12, 2025 · Updated 3 months ago)
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding (☆50 · Jan 14, 2025 · Updated last year)
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" (☆1,667 · Dec 4, 2025 · Updated 4 months ago)
- When do we not need larger vision models? (☆418 · Feb 8, 2025 · Updated last year)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,661 · Aug 1, 2024 · Updated last year)
- (☆4,624 · Sep 14, 2025 · Updated 6 months ago)
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C…" (☆283 · Jan 16, 2025 · Updated last year)
- An open source implementation of CLIP. (☆13,658 · Updated this week)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. (☆3,408 · May 19, 2025 · Updated 10 months ago)
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better (☆301 · Jan 23, 2025 · Updated last year)
- FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens (☆17 · Sep 8, 2025 · Updated 7 months ago)
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. (☆104 · Mar 23, 2025 · Updated last year)
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. (☆1,995 · Nov 7, 2025 · Updated 5 months ago)
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" (☆33 · Jul 8, 2025 · Updated 9 months ago)
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. (☆9,949 · Sep 22, 2025 · Updated 6 months ago)
- EVE Series: Encoder-Free Vision-Language Models from BAAI (☆368 · Jul 24, 2025 · Updated 8 months ago)
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! (☆11 · May 24, 2023 · Updated 2 years ago)
- Grounded Language-Image Pre-training (☆2,580 · Jan 24, 2024 · Updated 2 years ago)
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" (☆32 · Jan 26, 2026 · Updated 2 months ago)
- An Enhanced CLIP Framework for Learning with Synthetic Captions (☆40 · Apr 18, 2025 · Updated 11 months ago)
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! (☆2,242 · Mar 12, 2026 · Updated 3 weeks ago)
- Data release for the ImageInWords (IIW) paper. (☆225 · Nov 17, 2024 · Updated last year)
- CLIP-like model evaluation (☆812 · Mar 19, 2026 · Updated 3 weeks ago)
- (☆58 · Apr 24, 2024 · Updated last year)
- Codebase for Aria, an Open Multimodal Native MoE (☆1,085 · Jan 22, 2025 · Updated last year)
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning (☆1,476 · Jun 26, 2025 · Updated 9 months ago)
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks (☆392 · Jul 9, 2024 · Updated last year)
- 4M: Massively Multimodal Masked Modeling (☆1,792 · Jun 2, 2025 · Updated 10 months ago)
- [EMNLP 2025 Main] The official code of "Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval" (☆24 · Mar 30, 2026 · Updated last week)
- (☆17 · Oct 1, 2024 · Updated last year)
- KARL: Knowledge-Aware Reasoning and Reinforcement Learning for Knowledge-Intensive Visual Grounding (☆66 · Apr 2, 2026 · Updated last week)