kaleido-lab / dolphin
General video interaction platform based on LLMs, including Video ChatGPT
☆251 · Updated last year
Alternatives and similar repositories for dolphin:
Users interested in dolphin are comparing it to the repositories listed below.
- Official Repository of ChatCaptioner ☆463 · Updated last year
- VideoLLM: Modeling Video Sequence with Large Language Models ☆156 · Updated last year
- Official implementation of VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (COLM 2024) ☆169 · Updated 7 months ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆354 · Updated last year
- Relate Anything Model takes an image as input and uses SAM to identify the corresponding mask within the image. ☆454 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆427 · Updated 3 months ago
- [SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters ☆261 · Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆256 · Updated last year
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆295 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆401 · Updated 8 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆276 · Updated 11 months ago
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆191 · Updated 8 months ago
- Code for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆271 · Updated last year
- [NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing". ☆339 · Updated last month
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆150 · Updated 4 months ago
- BindDiffusion: One Diffusion Model to Bind Them All ☆166 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆333 · Updated 2 months ago
- LLaVA-Interactive-Demo ☆367 · Updated 8 months ago
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆292 · Updated 8 months ago
- Code for Text2Performer. Paper: Text2Performer: Text-Driven Human Video Generation ☆327 · Updated last year
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆147 · Updated 10 months ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆121 · Updated 4 months ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆191 · Updated last year
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts ☆322 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year
- The official repository of "Video assistant towards large language model makes everything easy" ☆221 · Updated 3 months ago
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … ☆359 · Updated last year