yangjie-cv / WeThink
WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning
☆28 · Updated last month
Alternatives and similar repositories for WeThink
Users interested in WeThink are comparing it to the libraries listed below.
- ☆30 · Updated 11 months ago
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆59 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆93 · Updated last month
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆67 · Updated 10 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 5 months ago
- [ICLR 2025] Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ☆58 · Updated 4 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆74 · Updated 3 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 10 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆29 · Updated 3 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆97 · Updated 3 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆59 · Updated last year
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆58 · Updated 2 weeks ago
- ☆58 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆147 · Updated 7 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆82 · Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆113 · Updated last month
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- ☆133 · Updated last year
- ☆76 · Updated 7 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆118 · Updated 3 months ago
- ☆91 · Updated last year
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆76 · Updated 4 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models ☆128 · Updated last year
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆31 · Updated 7 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 4 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 4 months ago
- Official repository of the MMDU dataset ☆92 · Updated 9 months ago
- Code for the ICLR 2025 paper "Towards Semantic Equivalence of Tokenization in Multimodal LLM" ☆67 · Updated 2 months ago
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ☆19 · Updated last year