hhaAndroid / awesome-mm-chat
A collection of multimodal (MM) + Chat resources
☆260 · Updated last week
Alternatives and similar repositories for awesome-mm-chat:
Users interested in awesome-mm-chat are comparing it to the repositories listed below.
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers ☆193 · Updated 11 months ago
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… ☆442 · Updated last month
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆404 · Updated 3 weeks ago
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆239 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆343 · Updated last week
- Research code for the Multimodal-Cognition Team at Ant Group ☆143 · Updated 9 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models) ☆161 · Updated last year
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆120 · Updated 5 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G ☆199 · Updated last month
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 3 months ago
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆333 · Updated 3 weeks ago
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆190 · Updated last month
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆139 · Updated 9 months ago
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆90 · Updated 6 months ago
- Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion ☆313 · Updated last month
- Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface" ☆182 · Updated last month
- A unified evaluation library for multiple machine learning libraries ☆262 · Updated last year
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆144 · Updated last month
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆220 · Updated 2 months ago
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770) ☆156 · Updated 7 months ago
- A PyTorch implementation of DiT, mainly for learning the DiT architecture ☆75 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆261 · Updated 10 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆343 · Updated last month
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆203 · Updated 10 months ago
- ☆116 · Updated 11 months ago
- PyTorch training code covering single-precision, half-precision, and mixed-precision training on a single GPU, multiple GPUs (DP/DDP), FSDP, and DeepSpeed, comparing training speed and GPU memory usage across methods ☆98 · Updated last year
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆205 · Updated last month
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆287 · Updated 2 months ago
- A journey toward a real multimodal R1! We are running large-scale experiments ☆297 · Updated 2 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆312 · Updated last year