Yangyi-Chen / SOLO
[TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling"
☆146 · Updated 9 months ago
Alternatives and similar repositories for SOLO
Users interested in SOLO are comparing it to the libraries listed below.
- Official implementation of the Law of Vision Representation in MLLMs ☆163 · Updated 9 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 9 months ago
- [ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆85 · Updated 11 months ago
- [CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆184 · Updated 2 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆132 · Updated 3 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- Official code for Paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆225 · Updated 4 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆189 · Updated 10 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆151 · Updated last year
- Matryoshka Multimodal Models ☆113 · Updated 7 months ago
- ☆69 · Updated last year
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆139 · Updated 2 months ago
- 【NeurIPS 2024】Dense Connector for MLLMs ☆172 · Updated 10 months ago
- ☆119 · Updated last year
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆345 · Updated 3 weeks ago
- Official repository of MMDU dataset ☆93 · Updated 10 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 · Updated 10 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 2 months ago
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆238 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆153 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 6 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆70 · Updated last year
- Official implement of MIA-DPO ☆64 · Updated 7 months ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆63 · Updated 4 months ago
- [ICML 2024] | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆113 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆209 · Updated 7 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆135 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆167 · Updated 5 months ago
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ☆109 · Updated last month
- ☆133 · Updated last year