pkunlp-icler / FastV
[ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models
☆419Updated 4 months ago
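FastV's core idea is to rank image tokens by the attention they receive in an early decoder layer (layer 2 in the paper) and drop roughly half of them for all subsequent layers. The snippet below is a minimal, hypothetical sketch of that pruning step in PyTorch; the function name, argument layout, and `keep_ratio` default are illustrative assumptions, not the repository's actual API.

```python
import torch

def prune_visual_tokens(hidden_states, attn_weights, visual_start, visual_len, keep_ratio=0.5):
    """Keep the top fraction of image tokens, ranked by attention received.

    hidden_states: (batch, seq_len, dim) activations after the filtering layer
    attn_weights:  (batch, heads, seq_len, seq_len) attention map of that layer
    visual_start, visual_len: span of image tokens within the sequence
    keep_ratio: fraction of image tokens to keep (~50% after layer 2 in the paper)
    """
    # Average attention each key position receives, over heads and query positions.
    received = attn_weights.mean(dim=1).mean(dim=1)                 # (batch, seq_len)
    visual_scores = received[:, visual_start:visual_start + visual_len]

    keep = max(1, int(visual_len * keep_ratio))
    # Select the highest-scoring image tokens and restore their original order.
    top_idx = visual_scores.topk(keep, dim=-1).indices.sort(dim=-1).values

    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device).unsqueeze(-1)
    kept_visual = hidden_states[batch_idx, visual_start + top_idx]  # (batch, keep, dim)

    # Reassemble: text before the image span + pruned image tokens + text after.
    return torch.cat([
        hidden_states[:, :visual_start],
        kept_visual,
        hidden_states[:, visual_start + visual_len:],
    ], dim=1)
```

In practice the pruned sequence (and the corresponding position ids / attention mask) would be fed to the remaining decoder layers, which is where the inference savings come from.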
Alternatives and similar repositories for FastV:
Users who are interested in FastV are comparing it to the libraries listed below.
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?"☆205Updated 8 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models☆127Updated 11 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought …☆307Updated 4 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer☆376Updated 2 weeks ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation☆144Updated last month
- An RLHF Infrastructure for Vision-Language Models☆173Updated 5 months ago
- Official implementation of the Law of Vision Representation in MLLMs☆154Updated 5 months ago
- A paper list of recent works on token compression for ViTs and VLMs☆441Updated last week
- Official repository for VisionZip (CVPR 2025)☆275Updated 2 months ago
- A journey to a real multimodal R1! We are running large-scale experiments☆297Updated 2 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback☆276Updated 7 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024)☆299Updated 3 months ago
- ☆328Updated last year
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models"☆176Updated 7 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model☆261Updated 10 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo…☆332Updated 8 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant☆237Updated 8 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions.☆338Updated 3 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts☆319Updated 9 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(…☆281Updated 5 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding☆273Updated 7 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness☆358Updated 2 months ago
- Long Context Transfer from Language to Vision☆374Updated last month
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text☆343Updated last month
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs'☆169Updated 2 weeks ago
- Efficient Multimodal Large Language Models: A Survey☆343Updated last week
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025]☆204Updated last month
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta…☆540Updated 3 weeks ago
- [NeurIPS 2024] Dense Connector for MLLMs☆162Updated 6 months ago
- ☆188Updated 10 months ago