[TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks.
☆234 · Dec 22, 2023 · Updated 2 years ago
Alternatives and similar repositories for UnIVAL
Users interested in UnIVAL are comparing it to the repositories listed below.
- Emu Series: Generative Multimodal Models from BAAI · ☆1,772 · Jan 12, 2026 · Updated 2 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" · ☆862 · May 8, 2025 · Updated 10 months ago
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models · ☆27 · Oct 27, 2023 · Updated 2 years ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding · ☆3,136 · Jun 4, 2024 · Updated last year
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models via Communicative Decoding · ☆46 · Jun 9, 2025 · Updated 9 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆101 · Sep 30, 2024 · Updated last year
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language · ☆665 · Oct 22, 2024 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … · ☆506 · Aug 9, 2024 · Updated last year
- [ICML 2024] Official implementation of the paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" · ☆99 · May 3, 2024 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models · ☆262 · Aug 5, 2025 · Updated 7 months ago
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" · ☆105 · Nov 9, 2023 · Updated 2 years ago
- An open-source framework for training large multimodal models · ☆4,079 · Aug 31, 2024 · Updated last year
- An open-source toolkit for LLM development · ☆2,806 · Jan 13, 2025 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding · ☆689 · Jan 29, 2025 · Updated last year
- ☆101 · May 16, 2024 · Updated last year
- Code and models for "GeneCIS: A Benchmark for General Conditional Image Similarity" · ☆61 · Jun 12, 2023 · Updated 2 years ago
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration · ☆1,592 · Jan 1, 2025 · Updated last year
- [ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model · ☆17 · Feb 13, 2025 · Updated last year
- Meta-Transformer for Unified Multimodal Learning · ☆1,654 · Dec 5, 2023 · Updated 2 years ago
- [ACL 2024 🔥] Video-ChatGPT, a video conversation model capable of generating meaningful conversation about videos. It combines the cap… · ☆1,499 · Aug 5, 2025 · Updated 7 months ago
- MMICL (PKU), a state-of-the-art VLM with in-context learning ability · ☆360 · Dec 18, 2023 · Updated 2 years ago
- A scalable implementation of diffusion and flow matching with XGBoost models, applied to calorimeter data · ☆19 · Nov 3, 2024 · Updated last year
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming · ☆297 · May 20, 2024 · Updated last year
- Code and models for the ICML 2024 paper "NExT-GPT: Any-to-Any Multimodal Large Language Model" · ☆3,619 · May 13, 2025 · Updated 10 months ago
- ☆1,710 · Sep 27, 2024 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain · ☆106 · Mar 14, 2024 · Updated 2 years ago
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models · ☆2,307 · Jul 15, 2025 · Updated 8 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions · ☆2,923 · May 26, 2025 · Updated 9 months ago
- ☆80 · Nov 24, 2024 · Updated last year
- ☆13 · Jul 20, 2024 · Updated last year
- Winning solution to the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) · ☆29 · Jan 1, 2024 · Updated 2 years ago
- GPU-accelerated video decoder · ☆20 · May 18, 2021 · Updated 4 years ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions · ☆260 · Apr 14, 2024 · Updated last year
- Long Context Transfer from Language to Vision · ☆402 · Mar 18, 2025 · Updated last year
- Strong and Open Vision Language Assistant for Mobile Devices · ☆1,345 · Apr 15, 2024 · Updated last year
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 · ☆1,826 · Nov 27, 2025 · Updated 3 months ago
- [CVPRW'23] First-place solution to the CVPR 2023 AQTC Challenge · ☆15 · Jul 18, 2023 · Updated 2 years ago
- ImageBind: One Embedding Space to Bind Them All · ☆8,992 · Nov 21, 2025 · Updated 4 months ago
- 🦦 Otter, a multi-modal model based on OpenFlamingo (an open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… · ☆3,344 · Mar 5, 2024 · Updated 2 years ago