ByteDance-Seed / SAIL
Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer"
☆76 · Updated last month
Alternatives and similar repositories for SAIL
Users interested in SAIL are comparing it to the repositories listed below.
- ☆95 · Updated 6 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 9 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆137 · Updated 6 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆184 · Updated last week
- ☆79 · Updated 6 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆83 · Updated 5 months ago
- ☆140 · Updated 2 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆76 · Updated last year
- ☆62 · Updated 3 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆51 · Updated 6 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆108 · Updated last month
- PyTorch implementation of NEPA ☆70 · Updated this week
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated last year
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆52 · Updated 5 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆40 · Updated 10 months ago
- Official implementation of "Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence" ☆127 · Updated last week
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 10 months ago
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆117 · Updated 2 months ago
- Multimodal RewardBench ☆55 · Updated 10 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆92 · Updated last year
- Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe ☆129 · Updated last week
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆73 · Updated last year
- ☆124 · Updated last year
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆165 · Updated 3 weeks ago
- [ICCV 2025] Dynamic-VLM ☆26 · Updated last year
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆73 · Updated 3 months ago
- [NeurIPS 2025] Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations ☆192 · Updated 3 months ago
- Official implementation of Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents (NeurIPS 2025) ☆43 · Updated last month
- [Preprint] GMem: A Modular Approach for Ultra-Efficient Generative Models ☆40 · Updated 9 months ago