JUNJIE99 / VISTA_Evaluation_FineTuning
Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original code and model can be accessed at FlagEmbedding.
☆30 · Updated 3 months ago
Alternatives and similar repositories for VISTA_Evaluation_FineTuning:
Users interested in VISTA_Evaluation_FineTuning are comparing it to the libraries listed below.
- ☆94 · Updated last year
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆81 · Updated 3 weeks ago
- ☆23 · Updated 4 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆57 · Updated 3 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆109 · Updated 2 months ago
- ☆61 · Updated 8 months ago
- ☆59 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆50 · Updated 7 months ago
- Official repository of the MMDU dataset ☆83 · Updated 4 months ago
- ☆33 · Updated 7 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆63 · Updated 4 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆126 · Updated 4 months ago
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆52 · Updated 2 months ago
- ☆20 · Updated 11 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆77 · Updated 7 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆44 · Updated 3 months ago
- A collection of visual instruction tuning datasets. ☆76 · Updated 11 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆29 · Updated 2 months ago
- ☆24 · Updated 9 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆43 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆17 · Updated last month
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆23 · Updated last month
- ☆89 · Updated last year
- ☆22 · Updated 6 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated last year
- ☆73 · Updated 11 months ago
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆18 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 4 months ago