MengLcool / SliMM
☆21 · Updated 10 months ago
Alternatives and similar repositories for SliMM
Users interested in SliMM are comparing it to the repositories listed below.
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆101 · Updated 11 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆54 · Updated 7 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆107 · Updated last month
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆32 · Updated 7 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆51 · Updated 10 months ago
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆56 · Updated 6 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆95 · Updated 10 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated 11 months ago
- The official implementation of RAR ☆92 · Updated last year
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1. ☆244 · Updated 3 months ago
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆98 · Updated 3 months ago
- ☆126 · Updated 8 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 6 months ago
- ☆80 · Updated last year
- Visual Instruction Tuning for the Qwen2 Base Model ☆40 · Updated last year
- ☆133 · Updated last year
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆135 · Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆152 · Updated 2 months ago
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ☆67 · Updated last year
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆155 · Updated 8 months ago
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆227 · Updated last month
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆158 · Updated last year
- Official repository for the CoMM Dataset ☆48 · Updated 10 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆58 · Updated 5 months ago
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆172 · Updated 4 months ago
- ☆120 · Updated last year
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆357 · Updated 4 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆209 · Updated 7 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 8 months ago
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆126 · Updated last month