hhaAndroid / awesome-mm-chat
A collection of multimodal (MM) + Chat resources
☆268 · Updated last month
Alternatives and similar repositories for awesome-mm-chat
Users interested in awesome-mm-chat are comparing it to the libraries listed below.
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers ☆205 · Updated last year
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆445 · Updated 2 weeks ago
- DeepSpeed tutorials, annotated examples & study notes (efficient training of large models) ☆167 · Updated last year
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… ☆454 · Updated 3 months ago
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆245 · Updated last year
- Research code for the Multimodal-Cognition Team in Ant Group ☆151 · Updated last month
- Efficient Multimodal Large Language Models: A Survey ☆356 · Updated last month
- Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G ☆219 · Updated 3 weeks ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆144 · Updated 11 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆120 · Updated 7 months ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆287 · Updated 4 months ago
- A new generation of CLIP with fine-grained discrimination capability (ICML 2025) ☆200 · Updated last month
- PyTorch training code covering single-precision, half-precision, and mixed-precision training on a single GPU and multiple GPUs (DP/DDP), plus FSDP and DeepSpeed, with comparisons of training speed and GPU memory usage across methods ☆102 · Updated last year
- A unified evaluation library for multiple machine learning libraries ☆265 · Updated last year
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆424 · Updated last week
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆92 · Updated 7 months ago
- Interview questions and interview experience for programmers at major tech companies ☆137 · Updated last month
- ☆336 · Updated 4 months ago
- MM-Eureka V0, also called R1-Multimodal-Journey; the latest version is in MM-Eureka ☆307 · Updated this week
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆91 · Updated 5 months ago
- Collection of image and video datasets for generative AI and multimodal visual AI ☆29 · Updated last year
- ☆82 · Updated 8 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆173 · Updated 2 months ago
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆126 · Updated 5 months ago
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆229 · Updated last month
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆314 · Updated last year
- Study notes on the official LLaVA code ☆25 · Updated 8 months ago
- [Under review at TPAMI] Towards Visual Grounding: A Survey ☆180 · Updated last week
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆262 · Updated 11 months ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆88 · Updated last year