Yangyi-Chen / SOLO
[TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"
☆146 · Updated 9 months ago
Alternatives and similar repositories for SOLO
Users interested in SOLO are comparing it to the repositories listed below.
- Official implementation of the Law of Vision Representation in MLLMs ☆165 · Updated 9 months ago
- ☆119 · Updated last year
- ☆74 · Updated last year
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆189 · Updated 2 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated 11 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆55 · Updated 10 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆175 · Updated 10 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆86 · Updated 11 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆194 · Updated 11 months ago
- Matryoshka Multimodal Models ☆112 · Updated 7 months ago
- ☆133 · Updated last year
- ☆88 · Updated 8 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆132 · Updated 4 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆226 · Updated 5 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆153 · Updated last year
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 7 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆153 · Updated 9 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 6 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 8 months ago
- Official implementation of MIA-DPO ☆65 · Updated 7 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated last year
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 8 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆152 · Updated last year
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆143 · Updated 3 months ago
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" ☆106 · Updated 3 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆169 · Updated 5 months ago
- ☆153 · Updated 10 months ago
- ☆91 · Updated last year