SparksJoe / Prism
A Framework for Decoupling and Assessing the Capabilities of VLMs
Related projects
Alternatives and complementary repositories for Prism
- Video dataset dedicated to portrait-mode video recognition.
- Official implementation of MIA-DPO
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs
- Official repo for StableLLAVA
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of…
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"
- Touchstone: Evaluating Vision-Language Models by Language Models
- Official repository of the MMDU dataset
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents"
- The official code of the paper "PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction".
- Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models".
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
- Explore the Limits of Omni-modal Pretraining at Scale
- 🔥 Aurora Series: A more efficient multimodal large language model series for video.
- This repo contains the code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks"
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model
- An LMM whose capabilities form a strict superset of its embedded LLM
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment