Wakals / CoVT
Official repo of "Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens"
☆229 · Updated 2 weeks ago
Alternatives and similar repositories for CoVT
Users interested in CoVT are comparing it with the repositories listed below.
- Multimodal Referring Segmentation ☆195 · Updated last month
- [ACM MM-2024] RefMask3D: Language-Guided Transformer for 3D Referring Segmentation ☆66 · Updated last year
- [CVPR-2024] Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation ☆86 · Updated last year
- A Survey of Image Editing ☆456 · Updated 4 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS'25] ☆257 · Updated last month
- ☆133 · Updated 9 months ago
- [Neurips'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆414 · Updated last year
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆150 · Updated last week
- [NIPS2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆252 · Updated 2 months ago
- Official PyTorch Code of ReKV (ICLR'25) ☆78 · Updated last month
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆215 · Updated 4 months ago
- [NeurIPS2024] Repo for the paper `ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models' ☆201 · Updated 5 months ago
- 🔥CVPR 2025 Multimodal Large Language Models Paper List ☆153 · Updated 9 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆109 · Updated this week
- Collections of Papers and Projects for Multimodal Reasoning ☆106 · Updated 8 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆133 · Updated 8 months ago
- [ICCV 2025] MOVE: Motion-Guided Few-Shot Video Object Segmentation ☆84 · Updated 3 months ago
- Official repository for VisionZip (CVPR 2025) ☆392 · Updated 5 months ago
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆104 · Updated 3 weeks ago
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning" ☆76 · Updated 5 months ago
- R1-like Video-LLM for Temporal Grounding ☆130 · Updated 6 months ago
- [NeurIPS'25] Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding ☆68 · Updated 2 weeks ago
- Awesome papers for multi-modal LLMs with grounding ability ☆19 · Updated 2 months ago
- [CVPR2025] Number it: Temporal Grounding Videos like Flipping Manga ☆135 · Updated 2 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆93 · Updated 3 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆100 · Updated 5 months ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆53 · Updated 3 months ago
- A benchmark dataset for GRES and GREC [CVPR2023 Highlight] ☆240 · Updated last month
- [AAAI 2025] AL-Ref-SAM 2: Unleashing the Temporal-Spatial Reasoning Capacity of GPT for Training-Free Audio and Language Referenced Video… ☆91 · Updated last year
- [ECCV24] VISA: Reasoning Video Object Segmentation via Large Language Model ☆199 · Updated last year