hhaAndroid / awesome-mm-chat
A collection of multimodal (MM) + Chat resources
☆247 Updated 3 weeks ago
Alternatives and similar repositories for awesome-mm-chat:
Users interested in awesome-mm-chat are comparing it to the repositories listed below
- Notes on multimodal topics for large language model (LLM) algorithm and application engineers ☆149 Updated 10 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models) ☆151 Updated last year
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆300 Updated 3 weeks ago
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation". ☆237 Updated last year
- Research Code for Multimodal-Cognition Team in Ant Group ☆139 Updated 8 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆114 Updated 4 months ago
- A unified evaluation library for multiple machine learning libraries ☆261 Updated 11 months ago
- Official PyTorch implementation of "Multi-modal Queried Object Detection in the Wild" (accepted by NeurIPS 2023) ☆291 Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆325 Updated last week
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… ☆421 Updated this week
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆308 Updated 10 months ago
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆79 Updated 4 months ago
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding. ☆116 Updated last month
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 Updated last month
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆98 Updated last month
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆499 Updated 4 months ago
- [TPAMI reviewing] Towards Visual Grounding: A Survey ☆111 Updated last month
- Collection of image and video datasets for generative AI and multimodal visual AI ☆22 Updated 10 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. ☆213 Updated last month
- PromptDet: Towards Open-vocabulary Detection using Uncurated Images, ECCV 2022 ☆164 Updated 2 years ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆202 Updated 2 months ago
- A DETR-style framework for open-vocabulary detection (OVD). CVPR 2023 ☆184 Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆132 Updated 7 months ago
- A paper list of recent works on token compression for ViTs and VLMs ☆364 Updated this week
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆153 Updated 5 months ago
- Fine-tuning Grounding DINO ☆89 Updated 2 months ago
- Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion ☆297 Updated this week