TCL-MAP is a powerful method for multimodal intent recognition (AAAI 2024)
☆56 · Updated Jan 25, 2024
Alternatives and similar repositories for TCL-MAP
Users interested in TCL-MAP are comparing it to the repositories listed below.
- MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022) ☆128 · Updated May 2, 2025
- The first comprehensive multimodal language analysis benchmark for evaluating foundation models ☆28 · Updated Sep 22, 2025
- ☆17 · Updated Jun 11, 2024
- Multimodal Classification and Out-of-distribution Detection ☆18 · Updated Apr 4, 2025
- MMER ☆14 · Updated Jan 8, 2026
- [ICASSP 2024] Code for the paper "SDIF-DA: A Shallow-to-Deep Interaction Framework with Data Augmentation for Multi-modal Intent Detection" ☆15 · Updated Jul 6, 2024
- ☆80 · Updated Dec 4, 2024
- The PyTorch code for the paper "CONSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotio… ☆13 · Updated Oct 21, 2022
- ☆17 · Updated Mar 21, 2024
- Source code for the paper "MCANet: Shared-weight-based MultiheadCrossAttention network for drug-target interaction prediction" ☆16 · Updated Apr 10, 2023
- ☆19 · Updated Jun 4, 2024
- Code for the paper "MIR-GAN: Refining Frame-Level Modality-Invariant Representations with Adversarial Network for Audio-Visual Speech Recogni… ☆16 · Updated Jun 21, 2023
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023) ☆78 · Updated Aug 29, 2025
- TupleInfoNCE (ICCV 2021) ☆17 · Updated Jul 22, 2022
- Code for the paper "Cross-Modal Global Interaction and Local Alignment for Audio-Visual Speech Recognition" ☆19 · Updated Jun 21, 2023
- Modality-Invariant Temporal Representation Learning ☆22 · Updated Apr 21, 2023
- Official code for the NeurIPS 2023 paper "Learning Unseen Modality Interaction" ☆18 · Updated Jan 22, 2024
- [AAAI 2024] Official implementation of "Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning" ☆23 · Updated Jan 31, 2024
- Frame-Level Emotional State Alignment Method for Speech Emotion Recognition ☆23 · Updated Dec 22, 2024
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation" (CVPR 2024) ☆59 · Updated Nov 5, 2024
- Implementation of the paper "Multimodal Transformer With Learnable Frontend and Self Attention for Emotion Recognition" submitted to ICAS… ☆28 · Updated Oct 22, 2021
- PyTorch implementation of "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin" (ICML 2016) ☆26 · Updated Mar 5, 2021
- ☆27 · Updated Apr 29, 2025
- [ICCV 2023] The repo for "Boosting Multi-modal Model Performance with Adaptive Gradient Modulation" ☆28 · Updated Jan 26, 2024
- Official PyTorch implementation of "MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens… ☆46 · Updated Jun 12, 2025
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively. ☆31 · Updated Mar 7, 2024
- Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis (ALMT) ☆138 · Updated Apr 9, 2025
- [AAAI 2025] Official PyTorch implementation of the paper "Bridging the Gap for Test-Time Multimodal Sentiment Analysis" ☆47 · Updated Feb 21, 2025
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆39 · Updated Apr 20, 2025
- Y-Agent Studio is an agent development suite for enterprise applications, with Y-Agent as its core module. It includes capabilities essential in vertical domains: agent orchestration, RAG, workflow logging, unit testing, workflow testing, corpus production, and more. Agent orchestration supports both multi-agent collaboration and mixed workflow orchestration within a single flow… ☆25 · Updated Oct 4, 2025
- ☆71 · Updated Jul 25, 2024
- ICCV 2021 ☆34 · Updated May 11, 2022
- ☆40 · Updated Apr 16, 2024
- A multimodal fine-grained correlation fusion network with attention mechanisms for visual-textual sentiment analysis ☆10 · Updated Jan 13, 2024
- TEXTOIR is the first open-source toolkit for text open intent recognition. (ACL 2021) ☆243 · Updated Nov 26, 2025
- [ICLR 2024] Implementation of "Prototypical Information Bottlenecking and Disentangling for Multimodal Cancer Survival Prediction" ☆78 · Updated May 5, 2024
- [Findings of NAACL 2024] Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation ☆39 · Updated Nov 23, 2024
- PS-Mixer: A Polar-Vector and Strength-Vector Mixer Model for Multimodal Sentiment Analysis ☆34 · Updated Apr 10, 2023
- Code for the InterSpeech 2023 paper "MMER: Multimodal Multi-task learning for Speech Emotion Recognition" ☆81 · Updated Mar 12, 2024