beichenzbc / Long-CLIP
[ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP"
⭐796 · Updated 8 months ago
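Long-CLIP's headline feature is relaxing CLIP's 77-token text limit (to 248 tokens in the paper) while keeping a drop-in, CLIP-style interface. Below is a minimal retrieval sketch, assuming the repo's `model/longclip.py` mirrors OpenAI CLIP's `load`/`tokenize`/forward API and that you run from the repo root with a downloaded checkpoint; paths and filenames are placeholders, not a verbatim recipe.

```python
# Minimal image-text matching sketch with Long-CLIP.
# Assumes the repo's `model` package is importable (run from the repo root)
# and that the API mirrors OpenAI CLIP; checkpoint/image paths are placeholders.
import torch
from PIL import Image
from model import longclip  # provided by the Long-CLIP repository

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = longclip.load("./checkpoints/longclip-B.pt", device=device)

# Captions can be far longer than vanilla CLIP's 77-token cap.
captions = [
    "A man in a red jacket crossing a rain-soaked street at dusk, "
    "with a black car parked beside a row of shuttered shops.",
    "A dog running along a sunny beach.",
]
text = longclip.tokenize(captions).to(device)
image = preprocess(Image.open("demo.png")).unsqueeze(0).to(device)

with torch.no_grad():
    # CLIP-style forward: returns image->text and text->image logits.
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Caption match probabilities:", probs)
```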
Alternatives and similar repositories for Long-CLIP:
Users interested in Long-CLIP are comparing it to the repositories listed below.
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ⭐577 · Updated 6 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ⭐866 · Updated 5 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ⭐809 · Updated 8 months ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ⭐595 · Updated 6 months ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ⭐273 · Updated 3 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). ⭐610 · Updated 7 months ago
- ⭐328 · Updated last year
- When do we not need larger vision models? ⭐388 · Updated 2 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ⭐319 · Updated 9 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA ⭐506 · Updated last month
- Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraini… ⭐588 · Updated 3 weeks ago
- Official repository for the paper PLLaVA ⭐647 · Updated 8 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ⭐385 · Updated 9 months ago
- Multimodal Models in Real World ⭐493 · Updated 2 months ago
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ⭐526 · Updated 6 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ⭐360 · Updated 5 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ⭐374 · Updated this week
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ⭐214 · Updated 9 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ⭐434 · Updated 4 months ago
- [CVPR 24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ⭐220 · Updated 6 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ⭐528 · Updated last week
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ⭐481 · Updated 8 months ago
- This is a repository for organizing papers, codes and other resources related to unified multimodal models. ⭐520 · Updated 2 weeks ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ⭐290 · Updated 2 months ago
- [NeurIPS 2023 & TPAMI] T2I-CompBench (++) for Compositional Text-to-image Generation Evaluation ⭐250 · Updated 2 weeks ago
- This repo contains the code for 1D tokenizer and generator ⭐838 · Updated last month
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ⭐802 · Updated 8 months ago
- PyTorch implementation of InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. ⭐425 · Updated 11 months ago
- Long Context Transfer from Language to Vision ⭐373 · Updated last month
- My implementation of "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ⭐228 · Updated 3 weeks ago