zjr2000 / Awesome-Multimodal-Chatbot
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
☆78 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Multimodal-Chatbot
Users interested in Awesome-Multimodal-Chatbot are comparing it to the repositories listed below.
- ☆66 · Updated 2 years ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last week
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆123 · Updated 2 months ago
- A family of highly capable yet efficient large multimodal models ☆187 · Updated 11 months ago
- VideoLLM: Modeling Video Sequence with Large Language Models ☆158 · Updated last year
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆92 · Updated last year
- General video interaction platform based on LLMs, including Video ChatGPT ☆252 · Updated 2 years ago
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆193 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆51 · Updated last year
- ☆73 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 5 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆151 · Updated last year
- [NeurIPS 2024 D&B] VideoGUI: A Benchmark for GUI Automation from Instructional Videos ☆43 · Updated 2 months ago
- [TMLR 2024] Official code for the paper "Mantis: Multi-Image Instruction Tuning" ☆225 · Updated 4 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year
- ☆99 · Updated last year
- [ICML 2024] MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities ☆307 · Updated 6 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆161 · Updated 7 months ago
- [TMLR 2023] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks ☆228 · Updated last year
- Research Trends in LLM-guided Multimodal Learning ☆357 · Updated last year
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆90 · Updated 2 years ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆144 · Updated 11 months ago
- [ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models ☆86 · Updated 11 months ago
- Open LLaMA Eyes to See the World ☆174 · Updated 2 years ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆88 · Updated 9 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆492 · Updated last year
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆23 · Updated 2 weeks ago
- ☆69 · Updated last year