OpenGVLab / Ask-Anything
[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding, plus support for many more LMs such as miniGPT4, StableLM, and MOSS.
☆3,158 · Updated 3 weeks ago
Alternatives and similar repositories for Ask-Anything:
Users interested in Ask-Anything are comparing it to the repositories listed below
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆2,908 · Updated 8 months ago
- An open-source framework for training large multimodal models. ☆3,813 · Updated 5 months ago
- Multimodal-GPT ☆1,488 · Updated last year
- Instruction Tuning with GPT-4 ☆4,261 · Updated last year
- InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBin… ☆3,211 · Updated 5 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,806 · Updated 10 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,411 · Updated 2 weeks ago
- Let ChatGPT teach your own chatbot in hours with a single GPU! ☆3,165 · Updated 10 months ago
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with dive… ☆1,706 · Updated last year
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆780 · Updated last year
- Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model ☆3,411 · Updated 3 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,286 · Updated 5 months ago
- An Open-source Toolkit for LLM Development ☆2,755 · Updated 3 weeks ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,477 · Updated 5 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,676 · Updated 4 months ago
- Strong, open-source foundation models for image recognition. ☆3,060 · Updated 6 months ago
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration ☆1,534 · Updated last month
- ImageBind: One Embedding Space to Bind Them All ☆8,502 · Updated 6 months ago
- Official repo for MM-REACT ☆941 · Updated last year
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,397 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,547 · Updated 2 months ago
- Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,357 · Updated 11 months ago
- [A toolbox for fun.] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆801 · Updated last year
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,134 · Updated 2 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆765 · Updated last year
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆4,675 · Updated 7 months ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆505 · Updated last year
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… ☆3,228 · Updated 11 months ago