Yangyi-Chen / SOLO
[TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"
☆148 · Updated 11 months ago
Alternatives and similar repositories for SOLO
Users interested in SOLO are comparing it to the libraries listed below.
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated 2 weeks ago
- Official repo for StableLLAVA ☆94 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆59 · Updated 11 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆87 · Updated last year
- [TMLR 2024] Official code for the paper "Mantis: Multi-Image Instruction Tuning" ☆230 · Updated 7 months ago
- The codebase for our EMNLP'24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 8 months ago
- Matryoshka Multimodal Models ☆112 · Updated 9 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆191 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 4 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 9 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆57 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆67 · Updated 6 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated last year
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 9 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆136 · Updated last week
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆157 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 8 months ago
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆74 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- [NeurIPS 2024] Dense Connector for MLLMs ☆177 · Updated last year
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆154 · Updated last year
- [NeurIPS 2025] Official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆157 · Updated 4 months ago
- Official implementation of MIA-DPO ☆66 · Updated 9 months ago
- An RLHF infrastructure for Vision-Language Models ☆184 · Updated 11 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 10 months ago