KastanDay / video-pretrained-transformer
Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scratch on YouTube (YT-1B dataset).
☆52 · Updated 2 years ago
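To make the fusion idea in the description concrete, here is a minimal sketch (not the repository's actual code) of projecting CLIP, Whisper, and scene-graph embeddings into a Flan-T5 backbone's embedding space and decoding text from the concatenated sequence. All dimensions, sequence lengths, and projection layers are assumptions for illustration only.

```python
# Illustrative sketch: fuse per-modality embeddings into one sequence
# and decode text with a Flan-T5 backbone. Not the repo's implementation.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, AutoTokenizer

t5 = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small")
tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
d_model = t5.config.d_model  # 512 for flan-t5-small

# Stand-ins for frame-level CLIP, audio (Whisper), and scene-graph embeddings.
# Shapes and dims are assumed for demonstration.
clip_emb = torch.randn(1, 16, 768)     # 16 video frames
whisper_emb = torch.randn(1, 32, 512)  # 32 audio frames
scene_emb = torch.randn(1, 8, 256)     # 8 scene-graph tokens

# Project each modality into the T5 embedding space (projections are
# untrained here), then concatenate along the sequence axis.
proj_clip = nn.Linear(768, d_model)
proj_whisper = nn.Linear(512, d_model)
proj_scene = nn.Linear(256, d_model)
fused = torch.cat(
    [proj_clip(clip_emb), proj_whisper(whisper_emb), proj_scene(scene_emb)],
    dim=1,
)

# Decode a caption from the fused multimodal sequence via inputs_embeds.
out_ids = t5.generate(inputs_embeds=fused, max_new_tokens=30)
print(tok.batch_decode(out_ids, skip_special_tokens=True))
```

With untrained projections the output is meaningless; in practice the projection layers and the backbone would be trained end to end on video-text pairs, as the repository describes for its YT-1B pre-training.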
Alternatives and similar repositories for video-pretrained-transformer
Users interested in video-pretrained-transformer are comparing it to the libraries listed below.
- Code for “Pretrained Language Models as Visual Planners for Human Assistance” ☆61 · Updated 2 years ago
- [ICLR2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 6 months ago
- VideoLLM: Modeling Video Sequence with Large Language Models ☆157 · Updated last year
- Graph learning framework for long-term video understanding ☆65 · Updated this week
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆91 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension