SHI-Labs / VCoder
[CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models
☆280 · Updated last year
Alternatives and similar repositories for VCoder
Users interested in VCoder are comparing it to the repositories listed below.
- LLaVA-Interactive-Demo ☆379 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI ☆315 · Updated last year
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks ☆232 · Updated last year
- [ICML 2025] Official PyTorch implementation of LongVU ☆412 · Updated 6 months ago
- Data release for the ImageInWords (IIW) paper ☆223 · Updated last year
- ☆189 · Updated last year
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆146 · Updated last month
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆246 · Updated last year
- [ICCV 2023] Segment Every Reference Object in Spatial and Temporal Spaces ☆237 · Updated 9 months ago
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆144 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆259 · Updated 3 months ago
- Official implementation of SEED-LLaMA (ICLR 2024) ☆635 · Updated last year
- ☆201 · Updated last year
- Multimodal Models in Real World ☆551 · Updated 9 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆390 · Updated last year
- Long Context Transfer from Language to Vision ☆398 · Updated 8 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆460 · Updated last year
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆169 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆159 · Updated last year
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- ☆180 · Updated 2 weeks ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆331 · Updated last year
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆341 · Updated last year
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆158 · Updated last year
- Official repository for the paper PLLaVA ☆673 · Updated last year
- PyTorch code for the paper From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models ☆205 · Updated 10 months ago
- [ICML 2025] Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆143 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆231 · Updated 8 months ago
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆131 · Updated last year