SALT-NLP / LLaVAR
Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding"
Related projects:
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger"
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions.
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models".
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024)
- Long Context Transfer from Language to Vision
- Official code for the paper "Mantis: Multi-Image Instruction Tuning"
- A family of highly capable yet efficient large multimodal models
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model
- LLaVA-HR: High-Resolution Large Language-Vision Assistant
- Aligning LMMs with Factually Augmented RLHF
- ControlLLM: Augment Language Models with Tools by Searching on Graphs
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale
- [ECCV 2024] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models"
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks.
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content
- Official implementation of SEED-LLaMA (ICLR 2024).
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models
- MMICL, a state-of-the-art VLM with multi-modal in-context learning (ICL) ability, from PKU
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, qwen-vl, phi3-v …
- EVE: Encoder-Free Vision-Language Models
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
- When do we not need larger vision models?
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts