OpenGVLab / VideoMamba
[ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding
☆983 · Updated last year
Alternatives and similar repositories for VideoMamba
Users interested in VideoMamba are comparing it to the repositories listed below.
- The suite for modeling video with Mamba ☆279 · Updated last year
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆669 · Updated 9 months ago
- Implementation of Vision Mamba from the paper: "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model" ☆461 · Updated last week
- [CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆1,626 · Updated last week
- [ICLR 2025 Spotlight] Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆485 · Updated 5 months ago
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆1,984 · Updated last month
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,552 · Updated last year
- Official repository of Agent Attention (ECCV2024) ☆634 · Updated 8 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆833 · Updated 2 weeks ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,008 · Updated last year
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆449 · Updated last month
- Curated list of video object segmentation (VOS) papers, datasets, and projects. ☆358 · Updated this week
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆637 · Updated 6 months ago
- [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆336 · Updated last year
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆320 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆384 · Updated 2 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆819 · Updated last year
- A curated list of awesome self-supervised learning methods in videos ☆149 · Updated 2 weeks ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆867 · Updated last week
- PyTorch implementation of RCG (https://arxiv.org/abs/2312.03701) ☆918 · Updated 10 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆898 · Updated last month
- [Official Repo] Visual Mamba: A Survey and New Outlooks ☆696 · Updated 5 months ago
- ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in co… ☆954 · Updated 11 months ago
- [Mamba-Survey-2024] Paper list for State-Space-Model/Mamba and its Applications ☆722 · Updated last month
- VisionLLM Series ☆1,094 · Updated 5 months ago
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,520 · Updated 5 months ago
- VMamba: Visual State Space Models; code is based on Mamba ☆2,734 · Updated 4 months ago
- Awesome Papers related to Mamba. ☆1,369 · Updated 9 months ago
- xLSTM as Generic Vision Backbone ☆482 · Updated 8 months ago
- [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer ☆324 · Updated last year