zjr2000 / Awesome-Multimodal-Chatbot
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
☆78 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Multimodal-Chatbot
Users interested in Awesome-Multimodal-Chatbot are comparing it to the libraries listed below.
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆261 · Updated 5 months ago
- ☆66 · Updated 2 years ago
- VideoLLM: Modeling Video Sequence with Large Language Models ☆158 · Updated 2 years ago
- General video interaction platform based on LLMs, including Video ChatGPT ☆254 · Updated 2 years ago
- ☆83 · Updated 2 years ago
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks ☆232 · Updated 2 years ago
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- ☆100 · Updated last year
- Research Trends in LLM-guided Multimodal Learning ☆356 · Updated 2 years ago
- Open LLaMA Eyes to See the World ☆174 · Updated 2 years ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆124 · Updated 7 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆164 · Updated last year
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆53 · Updated 2 years ago
- Official repo for StableLLAVA ☆95 · Updated 2 years ago
- [NeurIPS 2024 D&B] VideoGUI: A Benchmark for GUI Automation from Instructional Videos ☆48 · Updated 6 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆280 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI ☆315 · Updated last year
- Official Repository of ChatCaptioner ☆467 · Updated 2 years ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆171 · Updated last year
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆157 · Updated last year
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆194 · Updated last year
- ☆191 · Updated last year
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆237 · Updated 9 months ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆201 · Updated 2 years ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆319 · Updated 11 months ago
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions ☆357 · Updated 11 months ago
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆104 · Updated 2 years ago
- ☆70 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆553 · Updated last year