OpenGVLab / FluxViT
Make Your Training Flexible: Towards Deployment-Efficient Video Models
☆30 · Updated 2 weeks ago
Alternatives and similar repositories for FluxViT
Users interested in FluxViT are comparing it to the repositories listed below.
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆66 · Updated 5 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆51 · Updated 5 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆119 · Updated 3 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated 8 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆153 · Updated 2 weeks ago
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆27 · Updated 2 weeks ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago
- Official implementation of "TextRegion: Text-Aligned Region Tokens from Frozen Image-Text Models" ☆33 · Updated 3 weeks ago
- ☆84 · Updated 2 months ago
- Official implementation of Add-SD: Rational Generation without Manual Reference ☆27 · Updated 10 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆46 · Updated 5 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 10 months ago
- An open source implementation of CLIP (with TULIP support) ☆157 · Updated last month
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆116 · Updated 3 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆50 · Updated last year
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆75 · Updated 3 months ago
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆120 · Updated this week
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆111 · Updated 3 weeks ago
- ☆64 · Updated 2 months ago
- [arXiv'25] Official implementation of "Pix2Cap-COCO: Advancing Visual Comprehension via Pixel-Level Captioning" ☆17 · Updated 5 months ago
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective ☆69 · Updated 7 months ago
- Official PyTorch implementation of "No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding" ☆33 · Updated last year
- ☆76 · Updated 3 months ago
- ☆97 · Updated 10 months ago
- [ICML 2025] Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆134 · Updated last year
- Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model ☆97 · Updated 2 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆205 · Updated 5 months ago
- ☆26 · Updated last year
- Official implementation of the NeurIPS 2024 paper "Don't Look Twice: Run-Length Tokenization for Faster Video Transformers" ☆217 · Updated 2 months ago
- ☆34 · Updated last year