Aurora-slz / Synth-Empathy
Synth-Empathy: Towards High-Quality Synthetic Empathy Data
☆18 · Updated 10 months ago
Alternatives and similar repositories for Synth-Empathy
Users interested in Synth-Empathy are comparing it to the repositories listed below.
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆86 · Updated 3 weeks ago
- MIntRec2.0 is the first large-scale dataset for multimodal intent recognition and out-of-scope detection in multi-party conversations (IC… ☆70 · Updated 5 months ago
- ☆59 · Updated last year
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆70 · Updated 8 months ago
- ☆27 · Updated 8 months ago
- (NeurIPS 2025) OpenOmni: official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆123 · Updated 2 months ago
- A comprehensive overview of affective computing research in the era of large language models (LLMs). ☆30 · Updated last year
- A modified LLaVA framework for MOSS2, making MOSS2 a multimodal model. ☆13 · Updated last year
- [ACL 2024] EmoBench: Evaluating the Emotional Intelligence of Large Language Models ☆108 · Updated 8 months ago
- The official implementation of InstructERC ☆146 · Updated 8 months ago
- [ACL 2024] A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset ☆19 · Updated 7 months ago
- The repository of the ACL 2024 paper "ESCoT: Towards Interpretable Emotional Support Dialogue Systems". ☆36 · Updated 8 months ago
- Code and weights for LoVA, a novel model for long-form video-to-audio generation based on the Diffusion Transformer (DiT) arc… ☆15 · Updated 10 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆54 · Updated 9 months ago
- The first comprehensive multimodal language analysis benchmark for evaluating foundation models ☆28 · Updated 4 months ago
- [EMNLP 2024] "ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models" ☆26 · Updated last year
- ☆11 · Updated 5 months ago
- ☆22 · Updated last year
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆35 · Updated 9 months ago
- GPT-4V with Emotion ☆96 · Updated 2 years ago
- Explainable Multimodal Emotion Reasoning (EMER), OV-MER (ICML), and AffectGPT (ICML, Oral) ☆320 · Updated 5 months ago
- [ACL 2024] CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling ☆209 · Updated 8 months ago
- Code and data for the paper "Steering Conversational Large Language Models for Long Emotional Support Conversations", along with a UI to v… ☆13 · Updated 9 months ago
- [ACL 2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆76 · Updated last year
- PsycoLLM: a large language model for Chinese mental-health dialogue ☆62 · Updated 5 months ago
- HumanOmni ☆216 · Updated 10 months ago
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆76 · Updated 10 months ago
- This repository hosts the code, data, and model weights of PanoSent. ☆59 · Updated 6 months ago
- (ICLR 2025) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆90 · Updated 11 months ago
- ☆16 · Updated 6 months ago