ByteDance-Seed / SAIL
Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer"
☆67 · Updated last month
Alternatives and similar repositories for SAIL
Users interested in SAIL are comparing it to the repositories listed below.
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 3 months ago
- ☆90 · Updated 4 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆160 · Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆129 · Updated 4 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆87 · Updated last year
- ☆75 · Updated 4 months ago
- ☆130 · Updated last week
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆39 · Updated 8 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆75 · Updated 3 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆90 · Updated 7 months ago
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆71 · Updated last month
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆61 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆157 · Updated 10 months ago
- [NeurIPS 2025] The official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tun… ☆37 · Updated 8 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆48 · Updated 4 months ago
- PyTorch implementation of "UNIT: Unifying Image and Text Recognition in One Vision Encoder", NeurIPS 2024 ☆30 · Updated last year
- ☆119 · Updated last year
- Empowering Unified MLLM with Multi-granular Visual Generation ☆130 · Updated 9 months ago
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆111 · Updated 3 weeks ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆32 · Updated last year
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆84 · Updated 2 months ago
- Official implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding ☆158 · Updated 3 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆58 · Updated last month
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆87 · Updated 2 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 10 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆70 · Updated last year
- [ICCV 2025] Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" ☆90 · Updated 2 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆66 · Updated 3 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 8 months ago