sanbuphy / llm-vision-datasets
Collection of image and video datasets for generative AI and multimodal visual AI
☆20 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for llm-vision-datasets
- A paper list of recent works on token compression for ViT and VLMs☆149 · Updated this week
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs☆77 · Updated 5 months ago
- Research Code for Multimodal-Cognition Team in Ant Group☆123 · Updated 4 months ago
- Efficient Multimodal Large Language Models: A Survey☆280 · Updated 3 months ago
- A collection of multimodal (MM) + Chat resources☆209 · Updated 2 weeks ago
- AAAI 2024: Visual Instruction Generation and Correction☆91 · Updated 9 months ago
- Notes on multimodality-related knowledge for large language model (LLM) algorithm/application engineers☆89 · Updated 6 months ago
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed☆58 · Updated 3 weeks ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge☆122 · Updated 4 months ago
- Applications of large language models and multimodal models, mainly covering RAG, small models, agents, cross-modal search, OCR, etc.☆124 · Updated 2 weeks ago
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text☆274 · Updated last week
- An open-source implementation for fine-tuning the Qwen2-VL series by Alibaba Cloud☆120 · Updated 2 weeks ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback☆236 · Updated 2 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception☆489 · Updated 6 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models☆277 · Updated 3 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770)☆148 · Updated last month
- A collection of visual instruction tuning datasets☆75 · Updated 8 months ago
- Code for the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation"☆221 · Updated 9 months ago
- ☆109 · Updated 5 months ago
- DeepSpeed tutorial, annotated examples & study notes (efficient large-model training)☆118 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning☆163 · Updated 5 months ago
- The official implementation of RAR☆75 · Updated 7 months ago
- ☆78 · Updated 9 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions☆157 · Updated 4 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model☆246 · Updated 4 months ago
- TaiSu (太素): a large-scale Chinese multimodal dataset (a 100-million-scale Chinese vision-language pre-training dataset)☆175 · Updated last year
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation☆290 · Updated 3 months ago
- Making LLaVA Tiny via MoE-Knowledge Distillation☆63 · Updated last month
- [arXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling☆99 · Updated last month
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era"☆179 · Updated 5 months ago