opendatalab / laion5b-downloader
☆108 · Updated last year
Alternatives and similar repositories for laion5b-downloader:
Users interested in laion5b-downloader are comparing it to the repositories listed below.
- AAAI 2024: Visual Instruction Generation and Correction ☆92 · Updated last year
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆223 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆340 · Updated last month
- ☆87 · Updated 9 months ago
- TaiSu (太素), a large-scale Chinese multimodal dataset (a billion-scale Chinese vision-language pre-training dataset) ☆180 · Updated last year
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆214 · Updated 9 months ago
- ☆143 · Updated 3 months ago
- ☆67 · Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆138 · Updated 9 months ago
- ☆133 · Updated last year
- Multimodal Models in Real World ☆493 · Updated 2 months ago
- Official repository of the MMDU dataset ☆89 · Updated 6 months ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆205 · Updated last month
- The HD-VG-130M Dataset ☆117 · Updated last year
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, image, and video data. ☆230 · Updated last month
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆328 · Updated 5 months ago
- [NeurIPS 2023] Customize spatial layouts for conditional image synthesis models, e.g., ControlNet, using GPT ☆136 · Updated 11 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆123 · Updated 5 months ago
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆208 · Updated last year
- Code for the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆238 · Updated last year
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without additional training) ☆135 · Updated 3 months ago
- ☆161 · Updated 9 months ago
- Research code from the Multimodal-Cognition Team at Ant Group ☆142 · Updated 9 months ago
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆182 · Updated 3 weeks ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 3 months ago
- ☆182 · Updated 9 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. ☆219 · Updated 2 months ago
- ☆171 · Updated last year
- Unified Multi-modal IAA Baseline and Benchmark ☆75 · Updated 6 months ago
- [ECCV 2024] Towards Reliable Advertising Image Generation Using Human Feedback ☆49 · Updated 5 months ago