fangyuan-ksgk / Mini-LLaVA
A minimal implementation of a LLaVA-style VLM that can process interleaved image, text, and video inputs.
☆93 · Updated 6 months ago
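The LLaVA-style recipe the description refers to is straightforward to sketch: a vision encoder turns each image into patch features, a projector maps those features into the LLM's embedding space, and the projected features are spliced into the text-token embedding sequence wherever an image placeholder appears. Below is a minimal, self-contained PyTorch sketch of that interleaving step; `ToyLLaVA`, `IMAGE_TOKEN`, and all layer sizes are hypothetical stand-ins for illustration, not Mini-LLaVA's actual API.

```python
import torch
import torch.nn as nn

IMAGE_TOKEN = -1  # hypothetical placeholder id marking where an image goes


class ToyLLaVA(nn.Module):
    """Illustrative LLaVA-style wiring: vision features are projected into
    the LLM embedding space and spliced in at image-placeholder positions."""

    def __init__(self, vocab=32000, d_llm=512, d_vis=256, n_patches=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_llm)      # LLM token embeddings
        self.vision = nn.Linear(3 * 14 * 14, d_vis)  # stand-in vision encoder (per flattened patch)
        self.projector = nn.Linear(d_vis, d_llm)     # maps vision space -> LLM space
        self.n_patches = n_patches

    def forward(self, input_ids, images):
        # images: one (n_patches, 3*14*14) tensor per IMAGE_TOKEN placeholder
        seq, img_iter = [], iter(images)
        for tok in input_ids.tolist():
            if tok == IMAGE_TOKEN:
                feats = self.projector(self.vision(next(img_iter)))  # (n_patches, d_llm)
                seq.append(feats)
            else:
                seq.append(self.embed(torch.tensor([tok])))          # (1, d_llm)
        return torch.cat(seq, dim=0)  # interleaved sequence fed to the LLM


model = ToyLLaVA()
ids = torch.tensor([1, 5, IMAGE_TOKEN, 9, 2])  # "text <image> text"
imgs = [torch.randn(model.n_patches, 3 * 14 * 14)]
print(model(ids, imgs).shape)  # torch.Size([20, 512]): 4 text tokens + 16 patch tokens
```

Video follows the same pattern under this sketch's assumptions: each sampled frame is encoded like an image and its patch tokens are concatenated in temporal order at the placeholder position.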
Alternatives and similar repositories for Mini-LLaVA
Users interested in Mini-LLaVA are comparing it to the repositories listed below
- Python library to evaluate VLM robustness across diverse benchmarks ☆208 · Updated last week
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago
- ☆50 · Updated 5 months ago
- a family of highly capable yet efficient large multimodal models ☆185 · Updated 10 months ago
- ☆84 · Updated 2 weeks ago
- Matryoshka Multimodal Models ☆110 · Updated 5 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated 2 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆91 · Updated last month
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets ☆158 · Updated last year
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 9 months ago
- Reproduction of LLaVA-v1.5 based on the Llama-3-8B LLM backbone ☆65 · Updated 8 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆204 · Updated 5 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆256 · Updated 6 months ago
- ☆68 · Updated last year
- ☆142 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆160 · Updated 9 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆149 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆104 · Updated 3 weeks ago
- ☆56 · Updated 7 months ago
- PyTorch implementation of Object Recognition as Next Token Prediction [CVPR 2024 Highlight] ☆180 · Updated last month
- ☆179 · Updated 8 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models, TMLR 2025 ☆78 · Updated last month
- ☆87 · Updated last year
- ☆58 · Updated last year
- Parameter-efficient finetuning script for Phi-3-vision, the strong multimodal language model by Microsoft ☆58 · Updated last year
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆307 · Updated last week
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 8 months ago
- An open-source implementation of CLIP (with TULIP support) ☆157 · Updated last month
- minimal GRPO implementation from scratch ☆90 · Updated 3 months ago