PKU-YuanGroup / LanguageBind
[ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
☆811 · Updated last year
Alternatives and similar repositories for LanguageBind
Users interested in LanguageBind are comparing it to the repositories listed below.
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆374 · Updated 3 weeks ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆809 · Updated 10 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆622 · Updated 4 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆883 · Updated 6 months ago
- A Framework of Small-scale Large Multimodal Models ☆825 · Updated last month
- ☆778 · Updated 10 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆645 · Updated 7 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆1,894 · Updated last week
- Emu Series: Generative Multimodal Models from BAAI ☆1,724 · Updated 8 months ago
- VisionLLM Series ☆1,066 · Updated 3 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,170 · Updated 4 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆580 · Updated 7 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆331 · Updated 7 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆744 · Updated last year
- ☆613 · Updated last year
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆421 · Updated last week
- A family of lightweight multimodal models. ☆1,017 · Updated 6 months ago
- Official repository for the paper PLLaVA ☆654 · Updated 10 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆527 · Updated last year
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA. ☆520 · Updated 2 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆813 · Updated 9 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,173 · Updated 6 months ago
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments". ☆278 · Updated 11 months ago
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models ☆635 · Updated 5 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆821 · Updated 10 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆312 · Updated 10 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆860 · Updated 3 weeks ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆379 · Updated last month
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆619 · Updated last year
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆559 · Updated 3 weeks ago