HumanMLLM / Omni-Emotion
☆18 · Updated 4 months ago
Alternatives and similar repositories for Omni-Emotion
Users interested in Omni-Emotion are comparing it to the libraries listed below.
- GPT-4V with Emotion ☆92 · Updated last year
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆53 · Updated 8 months ago
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆31 · Updated 3 weeks ago
- Explainable Multimodal Emotion Reasoning (EMER), Open-vocabulary MER (OV-MER), and AffectGPT ☆166 · Updated last week
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model ☆14 · Updated last month
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆49 · Updated this week
- TCL-MAP is a powerful method for multimodal intent recognition (AAAI 2024) ☆42 · Updated last year
- [CVPR 2024 Highlight] Official implementation of the paper: Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-… ☆39 · Updated 3 weeks ago
- [Information Fusion 2024] HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition ☆106 · Updated 6 months ago
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆33 · Updated 11 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 9 months ago
- MIntRec2.0 is the first large-scale dataset for multimodal intent recognition and out-of-scope detection in multi-party conversations (IC… ☆47 · Updated 2 weeks ago
- A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral) ☆20 · Updated 6 months ago
- Towards Long Form Audio-visual Video Understanding ☆15 · Updated 2 weeks ago
- A collection of omni-MLLMs ☆28 · Updated this week
- av-SALMONN: Speech-Enhanced Audio-Visual Large Language Models ☆13 · Updated last year
- LUCY: Linguistic Understanding and Control Yielding Early Stage of Her ☆38 · Updated last month
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆48 · Updated 2 months ago
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆24 · Updated 4 months ago
- Official PyTorch implementation for "MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens… ☆25 · Updated last month
- Code repository for the ICASSP 2023 paper "MMCosine: Multi-Modal Cosine Loss Towards Balanced Audio-Visual Fine-Grained Learning" ☆19 · Updated last year
- Code for "A Large-scale Dataset for Audio-Language Representation Learning" ☆13 · Updated 7 months ago
- HumanOmni ☆161 · Updated 2 months ago
- [ACL 2024] A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset ☆14 · Updated last month