bytedance / Portrait-Mode-Video
Video dataset dedicated to portrait-mode video recognition.
☆52 · Updated 8 months ago
Alternatives and similar repositories for Portrait-Mode-Video
Users interested in Portrait-Mode-Video are comparing it to the libraries listed below.
- ☆78 · Updated 5 months ago
- ☆121 · Updated 2 months ago
- ☆155 · Updated 7 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆36 · Updated 3 months ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆160 · Updated 11 months ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆67 · Updated 10 months ago
- LMM solved catastrophic forgetting, AAAI 2025 ☆44 · Updated 4 months ago
- ☆72 · Updated last year
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆124 · Updated 2 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated 11 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆74 · Updated 10 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆45 · Updated 2 weeks ago
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆34 · Updated 2 months ago
- ☆187 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆27 · Updated last year
- Narrative movie understanding benchmark ☆77 · Updated 2 months ago
- [CVPR 2025] InstanceCap: Improving Text-to-Video Generation via Instance-aware Structured Caption 🔍 ☆45 · Updated last month
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆33 · Updated 9 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation ☆37 · Updated last year
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆83 · Updated 5 months ago
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆102 · Updated 8 months ago
- Structured Video Comprehension of Real-World Shorts ☆177 · Updated 3 weeks ago
- [NeurIPS 2023 Datasets and Benchmarks] "FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation", Yuanxin L… ☆54 · Updated last year
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆38 · Updated last month
- [CVPR 2025] A Hierarchical Movie-Level Dataset for Long Video Generation ☆67 · Updated 5 months ago
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆32 · Updated last month
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis ☆85 · Updated last year
- A large-scale dataset for training and evaluating a model's ability at dense-text image generation ☆74 · Updated this week