gpt4video / GPT4Video
Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation
☆142 · Updated 10 months ago
Alternatives and similar repositories for GPT4Video
Users interested in GPT4Video are comparing it to the repositories listed below.
- ☆183 · Updated last month
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆166 · Updated last year
- ☆191 · Updated last year
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆152 · Updated 9 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 10 months ago
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without additional training) ☆141 · Updated 7 months ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆163 · Updated 11 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- A new multi-shot video understanding benchmark, Shot2Story, with comprehensive video summaries and detailed shot-level captions ☆155 · Updated 7 months ago
- ☆74 · Updated last year
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks ☆229 · Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆126 · Updated 10 months ago
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆129 · Updated last year
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆193 · Updated last year
- Long Context Transfer from Language to Vision ☆392 · Updated 5 months ago
- ☆155 · Updated 7 months ago
- ☆78 · Updated 6 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆240 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆90 · Updated 10 months ago
- Repository for the MM'23 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi…" ☆51 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆227 · Updated 5 months ago
- Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last week
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆153 · Updated last year
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆129 · Updated 9 months ago
- Code release for our NeurIPS 2024 Spotlight paper "GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing" ☆147 · Updated 10 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆115 · Updated last year
- An initiative to replicate Sora ☆103 · Updated last year
- Artistic Vision-Language Understanding with Adapter-enhanced MiniGPT-4 ☆27 · Updated 2 years ago