LaVi-Lab / AIM
Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning"
☆21 · Updated 3 months ago
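AIM targets inference-time compression of the visual token sequence fed to a multi-modal LLM. As a rough, generic illustration of what token merging and pruning mean (this is not AIM's actual algorithm; the function name, the norm-based importance proxy, and the pairwise merging heuristic are assumptions for the sketch), a minimal PyTorch example:

```python
import torch

def merge_and_prune_tokens(tokens, keep_ratio=0.5, merge_threshold=0.9):
    """Illustrative sketch only: prune low-importance visual tokens, then
    average-merge highly similar neighbours. See the AIM paper for the real method."""
    # tokens: (num_tokens, dim)
    # 1) Prune: keep the highest-norm tokens as a crude importance proxy.
    num_keep = max(1, int(tokens.size(0) * keep_ratio))
    importance = tokens.norm(dim=-1)
    keep_idx = importance.topk(num_keep).indices.sort().values
    kept = tokens[keep_idx]

    # 2) Merge: average adjacent pairs whose cosine similarity exceeds the threshold.
    merged, i = [], 0
    while i < kept.size(0):
        if i + 1 < kept.size(0):
            sim = torch.cosine_similarity(kept[i], kept[i + 1], dim=0)
            if sim > merge_threshold:
                merged.append((kept[i] + kept[i + 1]) / 2)
                i += 2
                continue
        merged.append(kept[i])
        i += 1
    return torch.stack(merged)

# Example: compress 576 visual tokens with 1024-d features.
visual_tokens = torch.randn(576, 1024)
compressed = merge_and_prune_tokens(visual_tokens)
print(compressed.shape)  # at most (288, 1024)
```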
Alternatives and similar repositories for AIM:
Users interested in AIM are comparing it to the libraries listed below.
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆57 · Updated 9 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆27 · Updated 8 months ago
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ☆19 · Updated 5 months ago
- The official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 3 months ago
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆34 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆45 · Updated last week
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆68 · Updated 9 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆56 · Updated 6 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆29 · Updated 4 months ago
- [NeurIPS'24] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆57 · Updated 5 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (accepted by CVPR 2024) ☆44 · Updated 8 months ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆20 · Updated 2 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆72 · Updated last month
- Official implementation of MIA-DPO ☆54 · Updated 2 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆31 · Updated 3 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆35 · Updated 5 months ago
- OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? [CVPR 2025] ☆38 · Updated 3 weeks ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year
- Official repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆65 · Updated 3 weeks ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆48 · Updated last month
- [NeurIPS'24 D&B] Official dataloader and evaluation scripts for LongVideoBench ☆90 · Updated 7 months ago