yuntaoshou / Awesome-Emotion-Reasoning
Awesome-Emotion-Reasoning is a collection of emotion-reasoning works, including papers, code, and datasets.
☆48 · Updated last week
Alternatives and similar repositories for Awesome-Emotion-Reasoning
Users interested in Awesome-Emotion-Reasoning are comparing it to the repositories listed below.
- Official repository for the paper “MME-Emotion: A Holistic Evaluation Benchmark for Emotional Intelligence in Multimodal Large Language M… ☆14 · Updated 2 months ago
- Released code and data for「Multi-modal Stance Detection: New Datasets and Model」in ACL 2024. ☆26 · Updated last year
- A comprehensive overview of affective computing research in the era of large language models (LLMs). ☆27 · Updated last year
- A multimodal context reasoning approach that introduces multi-view semantic alignment information via prefix tuning. ☆16 · Updated 2 years ago
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆55 · Updated 5 months ago
- A curated list of works related to Misinformation Video Detection, as companion material for an ACM Multimedia 2023 survey ☆122 · Updated last month
- This repository hosts the code, data, and model weights of PanoSent. ☆56 · Updated 3 months ago
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆33 · Updated last year
- Code for ACM MM 2024 paper "A Picture Is Worth a Graph: A Blueprint Debate Paradigm for Multimodal Reasoning" ☆20 · Updated 10 months ago
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆61 · Updated last year
- ☆84 · Updated last year
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models ☆19 · Updated last year
- ☆58 · Updated last year
- A paper list about diffusion models for natural language processing. ☆182 · Updated 2 years ago
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆87 · Updated 9 months ago
- SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection ☆73 · Updated last year
- [Paperlist] Awesome paper list of multimodal dialog, including methods, datasets and metrics ☆37 · Updated 9 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆54 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆44 · Updated 4 months ago
- ☆58 · Updated last year
- A bilingual (Chinese/English) academic résumé template covering key sections such as education, publications, project experience, competition experience, and personal statement; suitable for applications to graduate programs, academic positions, or related industry roles. ☆145 · Updated 4 months ago
- 😎 All you need for the future is FollowGPT. ☆13 · Updated last year
- ☆12 · Updated 3 months ago
- [ICML 2025] Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆163 · Updated last month
- ☆23 · Updated 6 months ago
- [ACM MM 2022 Oral] This is the official implementation of "SER30K: A Large-Scale Dataset for Sticker Emotion Recognition" ☆27 · Updated 3 years ago
- [Paper][AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆151 · Updated last year
- Official Repository for "FakingRecipe: Detecting Fake News on Short Video Platforms from the Perspective of Creative Process", ACM MM 202… ☆54 · Updated 3 weeks ago
- Official implementation of Towards Multi-Modal Sarcasm Detection via Hierarchical Congruity Modeling with Knowledge Enhancement. ☆41 · Updated last year
- Code for "CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models" ☆19 · Updated 7 months ago