ModelTC / OmniBal
[ICML 2025] This is the official PyTorch implementation of "OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance".
☆27 · Updated 7 months ago
Alternatives and similar repositories for OmniBal
Users interested in OmniBal are comparing it to the repositories listed below.
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆59 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆28 · Updated last year
- Pruning the VLLMs ☆105 · Updated last year
- [NeurIPS'24] Official implementation of the paper "Unveiling the Tapestry of Consistency in Large Vision-Language Models" ☆38 · Updated last year
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ☆67 · Updated last year
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆43 · Updated 10 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆44 · Updated 9 months ago
- A paper list about Token Merge, Reduce, Resample, and Drop for MLLMs ☆84 · Updated 3 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆163 · Updated 4 months ago
- ☆110 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 8 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆104 · Updated last year
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆105 · Updated 7 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆99 · Updated 2 months ago
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆68 · Updated 3 weeks ago
- ☆46 · Updated last year
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆56 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆153 · Updated 7 months ago
- ☆101 · Updated 2 years ago
- A collection of visual instruction tuning datasets ☆76 · Updated last year
- [ICLR'26] Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology ☆73 · Updated 2 weeks ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆122 · Updated last year
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆72 · Updated 11 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Updated last year
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆42 · Updated 3 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models ☆155 · Updated last month
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆95 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆34 · Updated last year
- Code release for VTW (AAAI 2025 Oral) ☆64 · Updated 3 months ago
- [ICML'25] Official implementation of the papers "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… ☆237 · Updated last month