path2generalist / General-Level
On Path to Multimodal Generalist: General-Level and General-Bench
☆19 · Updated 2 months ago
Alternatives and similar repositories for General-Level
Users interested in General-Level are comparing it to the repositories listed below
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆73 · Updated 2 months ago
- ☆39 · Updated 4 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆19 · Updated 7 months ago
- ☆75 · Updated 3 months ago
- [NeurIPS 2025] The official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tun… ☆37 · Updated 7 months ago
- ☆129 · Updated 3 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆30 · Updated last week
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 3 months ago
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆57 · Updated last year
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago
- [ICCV 2025] Dynamic-VLM ☆25 · Updated 9 months ago
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆67 · Updated 3 weeks ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 6 months ago
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆44 · Updated 2 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆30 · Updated 9 months ago
- ☆60 · Updated last month
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆51 · Updated 2 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆57 · Updated 9 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆128 · Updated 4 months ago
- [AAAI 2025] LMM solved catastrophic forgetting ☆44 · Updated 5 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆48 · Updated 7 months ago
- [ICLR 2025 Spotlight] Official code repository for Interleaved Scene Graph ☆28 · Updated 2 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- ☆31 · Updated last year
- 🔥🔥🔥 Latest Papers, Code and Datasets on Video-LMM Post-Training ☆52 · Updated this week
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated 2 months ago
- [ECCV 2024 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ☆17 · Updated 7 months ago
- Quick Long Video Understanding ☆64 · Updated 3 months ago
- Official implementation of Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents (NeurIPS 2025) ☆39 · Updated 2 weeks ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆17 · Updated last month