bytedance / tarsier
Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, along with strong general video-understanding capabilities.
☆363 · Updated 3 weeks ago
Alternatives and similar repositories for tarsier
Users interested in tarsier are comparing it to the repositories listed below.
- Long Context Transfer from Language to Vision ☆374 · Updated last month
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆405 · Updated this week
- [ICML 2025] Official PyTorch implementation of LongVU ☆370 · Updated last week
- 🔥🔥 First-ever hour-scale video understanding models ☆314 · Updated 3 weeks ago
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆181 · Updated 4 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆515 · Updated this week
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆367 · Updated last week
- Official repository for the paper PLLaVA ☆649 · Updated 9 months ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆218 · Updated 8 months ago
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆615 · Updated 5 months ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆271 · Updated last month
- Multimodal Models in Real World ☆503 · Updated 2 months ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆602 · Updated 6 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆547 · Updated last week
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 3 weeks ago
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆795 · Updated 3 weeks ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆806 · Updated 9 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆529 · Updated this week
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆350 · Updated last week
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆326 · Updated 2 months ago
- Official repo for the paper "MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions" ☆438 · Updated 8 months ago
- NeurIPS 2024 paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆534 · Updated 6 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆237 · Updated 9 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆319 · Updated 9 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆580 · Updated 7 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA ☆512 · Updated last month
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆619 · Updated 3 months ago
- ☆186 · Updated 10 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆328 · Updated 6 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆386 · Updated 10 months ago