hhaAndroid / awesome-mm-chat
A collection of multimodal (MM) + Chat resources
☆277 · Updated last month
Alternatives and similar repositories for awesome-mm-chat
Users interested in awesome-mm-chat are comparing it to the libraries listed below.
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers☆246 · Updated last year
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation".☆252 · Updated last year
- Research code for the Multimodal-Cognition Team in Ant Group☆167 · Updated 2 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, plus the RL tool Vision-R1.☆237 · Updated last month
- Efficient Multimodal Large Language Models: A Survey☆373 · Updated 5 months ago
- New generation of CLIP with fine-grained discrimination capability, ICML 2025☆305 · Updated this week
- [NeurIPS 2025 Spotlight 🔥] Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Langu…☆220 · Updated this week
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed☆101 · Updated 11 months ago
- Collects notable works centered on reasoning models such as O1/R1 in the visual domain☆41 · Updated 2 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models)☆178 · Updated 2 years ago
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning☆261 · Updated 2 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs☆93 · Updated 8 months ago
- Reading notes on Multimodal Large Language Models, Large Language Models, and Diffusion Models☆659 · Updated 3 weeks ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge☆153 · Updated last month
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources☆271 · Updated last month
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation☆200 · Updated 6 months ago
- A unified evaluation library for multiple machine learning libraries☆266 · Updated last year
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding.☆237 · Updated 7 months ago
- Grafts the SmolVLM2 vision head onto the Qwen3-0.6B model and fine-tunes the combined model☆375 · Updated 3 weeks ago
- Trains a LLaVA model with better Chinese support; the training code and data are open-sourced.☆72 · Updated last year
- A curated list of papers and resources related to Described Object Detection, Open-Vocabulary/Open-World Object Detection and Referring E…☆313 · Updated 2 months ago
- Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv…☆473 · Updated 6 months ago
- [CVPR 2024] Generative Region-Language Pretraining for Open-Ended Object Detection☆182 · Updated 6 months ago
- [TPAMI 2025] Towards Visual Grounding: A Survey☆236 · Updated last month
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding.☆131 · Updated last month
- NeurIPS 2024 paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, and Editing☆569 · Updated 11 months ago
- ☆93 · Updated 2 years ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod…☆342 · Updated 6 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding☆203 · Updated 8 months ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval☆225 · Updated 4 months ago