qiulu66 / EgoPlan-Bench2
☆27 · Updated Apr 11, 2025
Alternatives and similar repositories for EgoPlan-Bench2
Users interested in EgoPlan-Bench2 are comparing it to the repositories listed below.
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆162 · Updated Oct 1, 2025
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) ☆86 · Updated Feb 27, 2025
- Turning to Video for Transcript Sorting ☆49 · Updated Aug 27, 2023
- ☆97 · Updated Jun 23, 2025
- ☆81 · Updated Jun 23, 2025
- [AAAI 2026] GenMAC for Compositional Text-to-Video Generation ☆32 · Updated Jan 10, 2026
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter" ☆16 · Updated Jun 20, 2023
- ☆37 · Updated Sep 16, 2024
- A small repository demonstrating the use of WebDataset and ImageNet ☆17 · Updated Dec 19, 2023
- ☆47 · Updated Apr 20, 2025
- Official implementation of the EMNLP Findings paper "VideoINSTA: Zero-shot Long-Form Video Understanding via Informative Spatia…" ☆24 · Updated Nov 15, 2024
- ☆40 · Updated Jun 6, 2025
- ☆19 · Updated Dec 6, 2023
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆79 · Updated Jun 17, 2024
- ☆23 · Updated Aug 17, 2024
- Structured Video Comprehension of Real-World Shorts ☆229 · Updated Sep 21, 2025
- PyTorch implementation of EgoInstructor (CVPR 2024) ☆28 · Updated Dec 1, 2024
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated Jun 27, 2023
- ☆26 · Updated Mar 20, 2023
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆236 · Updated Aug 18, 2025
- Official implementation of PARIS3D (ECCV 2024) ☆27 · Updated Sep 25, 2024
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆190 · Updated Feb 3, 2025
- [CVPR 2024] A benchmark for evaluating multimodal LLMs using multiple-choice questions ☆360 · Updated Jan 14, 2025
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated May 15, 2023
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆79 · Updated Aug 26, 2025
- [ICCV 2025] TokenBridge: Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation. https://yuqingwang1029.github.io/To… ☆151 · Updated Jul 24, 2025
- Egocentric Video Understanding Dataset (EVUD) ☆33 · Updated Jul 4, 2024
- ☆65 · Updated Jan 7, 2026
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆39 · Updated Mar 16, 2025
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated Sep 15, 2025
- [CVPR 2022] Multi-View Transformer for 3D Visual Grounding ☆80 · Updated Nov 9, 2022
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objects ☆94 · Updated Oct 18, 2025
- [SCIS] MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images ☆44 · Updated Nov 19, 2025
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆86 · Updated Mar 21, 2024
- ☆141 · Updated Oct 15, 2025
- Official repository of DriveWorld-VLA ☆25 · Updated Feb 1, 2026
- Code for the ICML 2024 paper "A Bayesian Approach to Online Planning" ☆13 · Updated Jun 17, 2024
- ☆46 · Updated Nov 8, 2024
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated Apr 11, 2025