Yangyi-Chen / SOLO
[TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"
☆127 · Updated 3 months ago
Alternatives and similar repositories for SOLO:
Users interested in SOLO are comparing it to the repositories listed below.
- Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆128 · Updated last month
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆115 · Updated 7 months ago
- ☆132 · Updated last year
- Official implementation of the Law of Vision Representation in MLLMs ☆149 · Updated 2 months ago
- Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆95 · Updated 7 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- Matryoshka Multimodal Models ☆96 · Updated 3 weeks ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆85 · Updated 5 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆165 · Updated 4 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024) ☆197 · Updated this week
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆290 · Updated this week
- ☆137 · Updated 3 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆156 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆86 · Updated last month
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆71 · Updated 2 weeks ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆139 · Updated 8 months ago
- ☆110 · Updated 6 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆63 · Updated 8 months ago
- ☆47 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆44 · Updated 3 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆32 · Updated 6 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆138 · Updated 6 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆189 · Updated last month
- This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆128 · Updated 8 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆96 · Updated 5 months ago
- LVBench: An Extreme Long Video Understanding Benchmark ☆80 · Updated 5 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆115 · Updated 9 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆98 · Updated last week
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆66 · Updated 2 months ago