OpenGVLab / InternVideo
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
☆1,810 · Updated last week
Alternatives and similar repositories for InternVideo:
Users interested in InternVideo are comparing it to the repositories listed below
- 【ICLR 2024🔥】Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆803 · Updated last year
- Official code for Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆612 · Updated 4 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆391 · Updated last week
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆617 · Updated 6 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,143 · Updated 2 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆607 · Updated 2 months ago
- [ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding ☆936 · Updated 9 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆799 · Updated 8 months ago
- VisionLLM Series ☆1,041 · Updated last month
- [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆327 · Updated 10 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,468 · Updated 8 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆802 · Updated 8 months ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,472 · Updated last year
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆2,197 · Updated this week
- ☆3,686 · Updated last month
- Emu Series: Generative Multimodal Models from BAAI ☆1,706 · Updated 6 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,337 · Updated 2 weeks ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆934 · Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,144 · Updated 2 months ago
- Official repository for the paper PLLaVA ☆646 · Updated 8 months ago
- Multi-modality pre-training ☆492 · Updated 11 months ago
- ☆775 · Updated 9 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆863 · Updated last month
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆783 · Updated 8 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,274 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆358 · Updated 4 months ago
- Grounded Language-Image Pre-training ☆2,378 · Updated last year
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆523 · Updated this week
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,225 · Updated 4 months ago
- An open-source implementation for fine-tuning the Qwen2-VL and Qwen2.5-VL series by Alibaba Cloud ☆628 · Updated this week