TIGER-AI-Lab / Vamba
Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025]
☆82 · Updated 3 weeks ago
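The paper's title describes a hybrid Mamba-Transformer design aimed at hour-long video token streams. For orientation only, the sketch below shows what a hybrid block of that general kind could look like in PyTorch. It is not the Vamba implementation: `GatedTokenMixer`, `HybridBlock`, and all shapes are illustrative assumptions, and the gated depthwise convolution is a simplified stand-in for a real selective state-space (Mamba) layer.

```python
# Illustrative only: a toy "hybrid Mamba-Transformer"-style block.
# NOT the Vamba architecture; names, shapes, and the gated-conv stand-in
# for the selective SSM are assumptions made for this sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedTokenMixer(nn.Module):
    """Linear-time token mixer: depthwise causal conv + gating.
    A simplified stand-in for a Mamba-style selective state-space layer."""

    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (B, T, D)
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        # causal depthwise conv: pad left, then trim back to length T
        u = self.conv(u.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return self.out_proj(F.silu(gate) * u)


class HybridBlock(nn.Module):
    """Residual stack: linear-time mixer -> self-attention -> MLP."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))
        self.mixer = GatedTokenMixer(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))          # cheap mixing over long streams
        h = self.norm2(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


if __name__ == "__main__":
    tokens = torch.randn(1, 2048, 512)             # e.g., a long video token stream
    print(HybridBlock(512)(tokens).shape)          # torch.Size([1, 2048, 512])
```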
Alternatives and similar repositories for Vamba
Users interested in Vamba are comparing it to the repositories listed below.
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆77 · Updated 8 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆83 · Updated 5 months ago
- Quick Long Video Understanding ☆62 · Updated 2 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆122 · Updated 2 months ago
- [CVPR 2025] The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" ☆19 · Updated 5 months ago
- Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer" ☆61 · Updated last month
- HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆63 · Updated 6 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆55 · Updated last month
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective ☆70 · Updated 9 months ago
- [Preprint] GMem: A Modular Approach for Ultra-Efficient Generative Models ☆39 · Updated 5 months ago
- Official implementation of "Next Block Prediction: Video Generation via Semi-Autoregressive Modeling" ☆38 · Updated 6 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆47 · Updated last month
- An open-source implementation of CLIP (with TULIP support) ☆162 · Updated 3 months ago
- Code for the "Scaling Language-Free Visual Representation Learning" paper (Web-SSL) ☆174 · Updated 3 months ago
- Official implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding ☆133 · Updated last month
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆48 · Updated last month
- [NeurIPS 2024 D&B Track] Official repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆67 · Updated 10 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆66 · Updated last month
- [ICLR 2025 Oral] Official PyTorch implementation of LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior ☆88 · Updated 6 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 6 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project ☆171 · Updated 5 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆184 · Updated 2 months ago
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning" ☆207 · Updated 4 months ago
- The official implementation of "MonoFormer: One Transformer for Both Diffusion and Autoregression" ☆86 · Updated 10 months ago
- [NeurIPS 2024] Official implementation of "Don't Look Twice: Run-Length Tokenization for Faster Video Transformers" ☆223 · Updated 4 months ago