beichenzbc / Long-CLIP
[ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP"
⭐ 889 · Updated last year
Alternatives and similar repositories for Long-CLIP
Users interested in Long-CLIP are comparing it to the repositories listed below.
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ⭐ 866 · Updated 6 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses… ⭐ 943 · Updated 6 months ago
- Official repository for the paper PLLaVA ⭐ 677 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ⭐ 602 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA ⭐ 617 · Updated last week
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ⭐ 299 · Updated last year
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ⭐ 671 · Updated last year
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ⭐ 579 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024) ⭐ 639 · Updated last year
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ⭐ 501 · Updated 2 months ago
- Official implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining" ⭐ 637 · Updated 3 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ⭐ 248 · Updated last year
- Multimodal Models in Real World ⭐ 555 · Updated 11 months ago
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ⭐ 597 · Updated 3 weeks ago
- A repository organizing papers, code, and other resources related to unified multimodal models ⭐ 796 · Updated 3 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ⭐ 390 · Updated last year
- When do we not need larger vision models? ⭐ 412 · Updated last year
- ⭐ 359 · Updated 2 years ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ⭐ 1,865 · Updated last month
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ⭐ 859 · Updated last year
- [ECCV 2024] Tokenize Anything via Prompting ⭐ 602 · Updated last year
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ⭐ 413 · Updated last month
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ⭐ 342 · Updated last year
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ⭐ 495 · Updated 10 months ago
- [CVPR 2024] Code release for "InstanceDiffusion: Instance-level Control for Image Generation" ⭐ 607 · Updated 7 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ⭐ 409 · Updated 9 months ago
- [ICML 2025] A new generation of CLIP with fine-grained discrimination capability ⭐ 543 · Updated 3 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ⭐ 682 · Updated last year
- [ICML 2025] Official PyTorch implementation of LongVU ⭐ 421 · Updated 9 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ⭐ 865 · Updated last year