Tencent / VITA
The official implementation of VITA, VITA-1.5, LongVITA, and VITA-Audio.
☆34 · Updated last month
Alternatives and similar repositories for VITA
Users interested in VITA are comparing it to the libraries listed below.
- ☆170 · Updated 6 months ago
- ☆55 · Updated 2 months ago
- ☆35 · Updated last week
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆66 · Updated 5 months ago
- ☆78 · Updated 4 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- An LMM that addresses catastrophic forgetting, AAAI 2025 ☆44 · Updated 4 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder; unifying image understanding and generation. ☆37 · Updated last year
- A Foundation Model for Industrial Signal Comprehensive Representation ☆36 · Updated 3 weeks ago
- Explore how to obtain VQ-VAE models efficiently! ☆51 · Updated last month
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆56 · Updated last week
- The official implementation of Freeze-Omni. ☆13 · Updated last month
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆96 · Updated 2 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆28 · Updated 8 months ago
- ☆78 · Updated 5 months ago
- ☆32 · Updated 4 months ago
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆38 · Updated 2 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆43 · Updated 5 months ago
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems ☆83 · Updated last year
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆53 · Updated 3 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated last year
- ☆30 · Updated 3 months ago
- ☆21 · Updated 7 months ago
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey". And this paper is unde… ☆55 · Updated 3 weeks ago
- HumanOmni ☆194 · Updated 5 months ago
- 🤗 R1-AQA Model: mispeech/r1-aqa ☆296 · Updated 5 months ago
- An easy-to-use, fast, and easily integrable tool for evaluating audio LLMs ☆135 · Updated last month
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 7 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆84 · Updated last month