bytedance / Portrait-Mode-Video
Video dataset dedicated to portrait-mode video recognition.
☆52 · Updated 2 weeks ago
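Here, "portrait mode" simply means clips whose frame height exceeds their width. As a quick illustration (not part of this repository's code, and assuming OpenCV is installed), a clip's orientation can be checked like this:

```python
import cv2  # assumption: opencv-python is installed; this helper is illustrative, not from Portrait-Mode-Video


def is_portrait(video_path: str) -> bool:
    """Return True when the clip's frame height exceeds its width (portrait mode)."""
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise IOError(f"could not open {video_path}")
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    cap.release()
    return height > width
```

Note that this sketch reads the raw frame dimensions and may ignore rotation metadata that some phone recordings rely on.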
Alternatives and similar repositories for Portrait-Mode-Video
Users interested in Portrait-Mode-Video are comparing it to the libraries listed below.
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆38 · Updated 5 months ago
- Narrative movie understanding benchmark ☆76 · Updated 4 months ago
- ☆155 · Updated 9 months ago
- [NeurIPS 2023 Datasets and Benchmarks] "FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation", Yuanxin L… ☆56 · Updated last year
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆79 · Updated last year
- Official repo for StableLLAVA ☆94 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆35 · Updated 4 months ago
- ☆78 · Updated 7 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- ☆130 · Updated 2 weeks ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆163 · Updated last year
- [CVPR 2025] A Hierarchical Movie Level Dataset for Long Video Generation ☆72 · Updated 7 months ago
- An LMM that solves catastrophic forgetting (AAAI 2025) ☆44 · Updated 6 months ago
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆70 · Updated last year
- A Large-scale Dataset for training and evaluating a model's ability on Dense Text Image Generation ☆81 · Updated last month
- ☆196 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆129 · Updated 4 months ago
- ☆72 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆104 · Updated 10 months ago
- ☆57 · Updated last year
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team. ☆74 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated last year
- Supercharged BLIP-2 that can handle videos ☆122 · Updated last year
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆37 · Updated last year
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- ☆56 · Updated 6 months ago
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆36 · Updated 3 months ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆91 · Updated 7 months ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆90 · Updated last year