Cooperx521 / ScaleCap
(ICLR 2026) Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing"
☆58 · Updated last week
Alternatives and similar repositories for ScaleCap
Users interested in ScaleCap are comparing it to the repositories listed below.
- Official implementation of MIA-DPO ☆70 · Updated last year
- LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling ☆186 · Updated last week
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆86 · Updated 6 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆118 · Updated 6 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆138 · Updated 8 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆60 · Updated 7 months ago
- TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs ☆97 · Updated last week
- The official code of "Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning" ☆79 · Updated 3 months ago
- A collection of awesome think with videos papers. ☆86 · Updated 2 months ago
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning". ☆83 · Updated 6 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆117 · Updated this week
- The code repository of UniRL ☆51 · Updated 8 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆236 · Updated 5 months ago
- ICML 2025 ☆63 · Updated 5 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ICLR 2025 ☆100 · Updated 10 months ago
- ☆132 · Updated 10 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆94 · Updated last year
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆114 · Updated last month
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆39 · Updated last week
- [NIPS 2025 DB Oral] Official Repository of paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆140 · Updated this week
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆135 · Updated 9 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ☆31 · Updated 5 months ago
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆204 · Updated 7 months ago
- ☆80 · Updated 7 months ago
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆182 · Updated 2 months ago
- Co-Reinforcement Learning for Unified Multimodal Understanding and Generation ☆39 · Updated 6 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆134 · Updated 6 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆141 · Updated 10 months ago
- Codes for ICLR 2025 Paper: Towards Semantic Equivalence of Tokenization in Multimodal LLM ☆77 · Updated 9 months ago
- Official Code for "ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning" ☆79 · Updated 2 months ago