OpenGVLab / InternVideo
[ECCV 2024] Video Foundation Models & Data for Multimodal Understanding
☆1,582 · Updated this week
Alternatives and similar repositories for InternVideo:
Users interested in InternVideo are comparing it to the repositories listed below.
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking (☆568, updated 3 months ago)
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment (☆773, updated 10 months ago)
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding (☆574, updated 2 weeks ago)
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models (☆312, updated 8 months ago)
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want (☆767, updated 5 months ago)
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding (☆893, updated 6 months ago)
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" (☆904, updated 9 months ago)
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs (☆1,040, updated this week)
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding (☆327, updated 2 months ago)
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) (☆758, updated 6 months ago)
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… (☆1,277, updated 5 months ago)
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… (☆1,342, updated last month)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,400, updated 5 months ago)
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training (☆1,423, updated last year)
- Grounded Language-Image Pre-training (☆2,306, updated last year)
- VisionLLM Series (☆983, updated this week)
- Hiera: A fast, powerful, and simple hierarchical vision transformer (☆943, updated 10 months ago)
- Multi-modality pre-training (☆479, updated 8 months ago)
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding (☆583, updated last month)
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… (☆818, updated 2 months ago)
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) (☆1,168, updated 7 months ago)
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language (☆1,297, updated last year)
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling (☆276, updated last week)
- A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… (☆1,007, updated 3 months ago)
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" (☆735, updated 5 months ago)
- Official repository for the paper PLLaVA (☆635, updated 6 months ago)
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers (☆562, updated 3 months ago)
- 🔥🔥🔥 Latest Papers, Codes and Datasets on Vid-LLMs (☆1,855, updated this week)
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" (☆1,238, updated 10 months ago)
- Code release for "Learning Video Representations from Large Language Models" (☆503, updated last year)