pkunlp-icler / FastV
[ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models"
☆364 · Updated last month
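FastV's core idea, reflected in the paper title, is to rank visual tokens by how much attention they receive in an early decoder layer (e.g. layer 2) and to drop the lowest-ranked ones for all later layers, cutting inference cost without retraining. The snippet below is an unofficial, minimal sketch of that pruning step; the function name, tensor shapes, and the simple mean-received-attention score are illustrative assumptions, not the repository's actual API.

```python
import torch

def prune_visual_tokens(hidden_states, attn_weights, image_token_mask, keep_ratio=0.5):
    """Unofficial sketch of FastV-style pruning: after an early layer, keep only
    the visual tokens that receive the most attention and drop the rest.

    hidden_states:    (batch, seq_len, dim) token states after the chosen layer
    attn_weights:     (batch, heads, seq_len, seq_len) attention from that layer
    image_token_mask: (batch, seq_len) bool, True at visual-token positions
    """
    # Average attention each key position receives, over heads and query positions.
    received = attn_weights.mean(dim=1).mean(dim=1)  # (batch, seq_len)

    pruned = []
    for b in range(hidden_states.size(0)):
        img_idx = image_token_mask[b].nonzero(as_tuple=True)[0]
        txt_idx = (~image_token_mask[b]).nonzero(as_tuple=True)[0]

        # Nothing to prune if the sample has no visual tokens.
        if img_idx.numel() == 0:
            pruned.append(hidden_states[b])
            continue

        # Keep the top `keep_ratio` fraction of visual tokens by received attention.
        k = max(1, int(keep_ratio * img_idx.numel()))
        top = received[b, img_idx].topk(k).indices

        # Preserve the original token order among the kept positions.
        keep_idx = torch.cat([txt_idx, img_idx[top]]).sort().values
        pruned.append(hidden_states[b, keep_idx])

    return pruned  # one (kept_len, dim) tensor per sample
```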
Alternatives and similar repositories for FastV:
Users interested in FastV are comparing it to the repositories listed below
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆116 · Updated 9 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆255 · Updated 7 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆149 · Updated 3 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆183 · Updated 5 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆165 · Updated 4 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆156 · Updated 4 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆328 · Updated last month
- A paper list of recent works on token compression for ViT and VLM ☆324 · Updated last week
- Long Context Transfer from Language to Vision ☆360 · Updated 3 months ago
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆309 · Updated 3 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆139 · Updated last week
- An RLHF Infrastructure for Vision-Language Models ☆162 · Updated 3 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆364 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆189 · Updated last month
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆295 · Updated last week
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆234 · Updated last month
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆311 · Updated 5 months ago
- Efficient Multimodal Large Language Models: A Survey ☆314 · Updated 6 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024) ☆198 · Updated this week
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆265 · Updated 5 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆227 · Updated 3 weeks ago
- ☆308 · Updated last year
- 📖 This is a repository for organizing papers, code, and other resources related to unified multimodal models. ☆374 · Updated last month
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆313 · Updated 7 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆270 · Updated 11 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆227 · Updated 6 months ago
- Official repo for "VisionZip: Longer is Better but Not Necessary in Vision Language Models" ☆235 · Updated last month
- The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A su… ☆223 · Updated last month