bytedance / Portrait-Mode-Video
Video dataset dedicated to portrait-mode video recognition.
☆52 · Updated 7 months ago
Alternatives and similar repositories for Portrait-Mode-Video
Users interested in Portrait-Mode-Video are comparing it to the repositories listed below.
- ☆71 · Updated last year
- ☆76 · Updated 4 months ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆156 · Updated 9 months ago
- ☆154 · Updated 6 months ago
- LMM that solves catastrophic forgetting, AAAI 2025 ☆44 · Updated 3 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆113 · Updated last month
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team. ☆72 · Updated 9 months ago
- Narrative movie understanding benchmark ☆73 · Updated last month
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- ☆88 · Updated 3 weeks ago
- ☆188 · Updated last year
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆63 · Updated 9 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆35 · Updated last month
- Official repo for StableLLAVA ☆95 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆26 · Updated last year
- ☆87 · Updated last year
- Supercharged BLIP-2 that can handle videos ☆118 · Updated last year
- ☆58 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆102 · Updated 7 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆37 · Updated last year
- A Large-scale Dataset for training and evaluating a model's ability on Dense Text Image Generation ☆71 · Updated 4 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆105 · Updated 10 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆37 · Updated last year
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆70 · Updated 4 months ago
- [CVPR 2025] InstanceCap: Improving Text-to-Video Generation via Instance-aware Structured Caption 🔍 ☆44 · Updated last week
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆28 · Updated last month
- [CVPR 2025] A Hierarchical Movie Level Dataset for Long Video Generation ☆62 · Updated 4 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks] "FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation", Yuanxin L… ☆54 · Updated last year
- [ECCV2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year