GaiZhenbiao / Phi3V-Finetuning
Parameter-efficient fine-tuning script for Phi-3-vision, Microsoft's strong multimodal language model.
☆58 Updated 9 months ago
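"Parameter-efficient fine-tuning" typically means a LoRA-style approach: the base weights stay frozen and only a low-rank update is trained. A minimal plain-Python sketch of the idea (illustrative only; this is not code from the repo, and it assumes a LoRA-style adapter):

```python
# LoRA sketch: instead of training the full d x d weight W, train a
# low-rank update A @ B with rank r << d. All names here are illustrative.
import random

def matmul(a, b):
    """Plain-Python matrix multiply: (n x k) @ (k x m) -> (n x m)."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

d, r = 4, 1  # hidden size d, adapter rank r << d
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]  # frozen base weight
A = [[random.gauss(0, 1) for _ in range(r)] for _ in range(d)]  # trainable, d x r
B = [[0.0] * d for _ in range(r)]                               # trainable, r x d (zero init)

# Effective weight is W + A @ B; since B starts at zero, the adapter is a
# no-op before training, and only 2*d*r parameters are updated instead of d*d.
delta = matmul(A, B)
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d      # 16 parameters in the full weight
lora_params = 2 * d * r  # 8 trainable adapter parameters
```

With a real model the savings are dramatic: for a 4096-dimensional layer and rank 16, the adapter trains roughly 0.8% of the layer's parameters.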
Alternatives and similar repositories for Phi3V-Finetuning:
Users interested in Phi3V-Finetuning are comparing it to the libraries listed below.
- This project is a collection of fine-tuning scripts to help researchers fine-tune Qwen 2 VL on HuggingFace datasets.☆65 Updated 6 months ago
- A family of highly capable yet efficient large multimodal models☆179 Updated 7 months ago
- OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation☆72 Updated 3 weeks ago
- This is the repo for the paper "PANGEA: A FULLY OPEN MULTILINGUAL MULTIMODAL LLM FOR 39 LANGUAGES"☆105 Updated 4 months ago
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft.☆92 Updated 2 weeks ago
- Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"☆145 Updated last week
- Chat with Phi 3.5/3 Vision LLMs. Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which includ…☆33 Updated 3 months ago
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability.☆91 Updated 3 months ago
- InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions (AAAI 2024)☆160 Updated 10 months ago
- ☆68 Updated 9 months ago
- ☆145 Updated 11 months ago
- Quick exploration into fine-tuning Florence-2☆307 Updated 6 months ago
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati…☆96 Updated 9 months ago
- ☆56 Updated 4 months ago
- ☆29 Updated 7 months ago
- ☆132 Updated last year
- The Hugging Face implementation of Fine-grained Late-interaction Multi-modal Retriever.☆84 Updated 2 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models☆239 Updated 3 months ago
- Reproduction of LLaVA-v1.5 based on Llama-3-8b LLM backbone.☆65 Updated 5 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H…☆84 Updated last month
- A novel multimodal (vision) RAG architecture☆25 Updated 6 months ago
- ☆73 Updated last year
- ☆57 Updated 9 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement☆69 Updated 2 weeks ago
- Code for KaLM-Embedding models☆74 Updated 3 weeks ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"☆149 Updated 3 weeks ago
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs"☆52 Updated 5 months ago
- Official repository for RAGViz: Diagnose and Visualize Retrieval-Augmented Generation [EMNLP 2024]☆82 Updated 2 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs☆78 Updated 5 months ago
- A simple MLLM that surpasses QwenVL-Max using only open-source data in a 14B LLM.☆37 Updated 7 months ago