OpenGVLab / VideoMamba
[ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding
☆955 · Updated 10 months ago
Alternatives and similar repositories for VideoMamba
Users who are interested in VideoMamba are comparing it to the libraries listed below.
- A suite for video modeling with Mamba ☆265 · Updated last year
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆641 · Updated 7 months ago
- [CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆1,435 · Updated 2 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆985 · Updated last year
- Implementation of Vision Mamba from the paper: "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Mod… ☆455 · Updated this week
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,505 · Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆818 · Updated 10 months ago
- [ICLR 2025 Spotlight] Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆464 · Updated 3 months ago
- [Official Repo] Visual Mamba: A Survey and New Outlooks ☆668 · Updated 3 months ago
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆1,887 · Updated this week
- Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023) ☆566 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,166 · Updated this week
- Official repository of Agent Attention (ECCV2024) ☆619 · Updated 6 months ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆852 · Updated 2 months ago
- [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆328 · Updated last year
- VMamba: Visual State Space Models, code is based on Mamba ☆2,622 · Updated 2 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆883 · Updated 6 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆309 · Updated 10 months ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆425 · Updated 2 years ago
- Awesome Papers related to Mamba. ☆1,358 · Updated 7 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,054 · Updated 11 months ago
- PyTorch implementation of RCG (https://arxiv.org/abs/2312.03701) ☆915 · Updated 8 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆420 · Updated this week
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆372 · Updated 3 weeks ago
- Official open-source code for "Masked Autoencoders As Spatiotemporal Learners" ☆338 · Updated 6 months ago
- The official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention" ☆213 · Updated last year
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆384 · Updated 10 months ago
- ☆517 · Updated 6 months ago
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆326 · Updated 5 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆810 · Updated last year