akashe / Multimodal-action-recognition
Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text.
☆69 · Updated 3 years ago
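The idea described above (choosing an action from combined voice and text inputs) is commonly implemented as late fusion: encode each modality, concatenate the embeddings, and score candidate actions. The following is a minimal, hypothetical pure-Python sketch of that pattern, not the repository's actual code; the embeddings, weights, and function names are illustrative assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_action(voice_emb, text_emb, action_weights):
    """Late fusion: concatenate voice and text embeddings, then pick the
    action whose (hypothetical) weight vector scores highest."""
    fused = voice_emb + text_emb  # concatenation fusion
    scores = [sum(w * x for w, x in zip(weights, fused))
              for weights in action_weights]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs

# Toy example: 2-dim voice embedding, 2-dim text embedding, 3 actions.
voice = [0.9, 0.1]
text = [0.2, 0.8]
weights = [
    [1.0, 0.0, 0.0, 0.0],  # action 0 keys on the first voice feature
    [0.0, 0.0, 0.0, 1.0],  # action 1 keys on the second text feature
    [0.5, 0.5, 0.5, 0.5],  # action 2 weighs all features equally
]
action, probs = select_action(voice, text, weights)
```

In a real system the hand-set weight vectors would be a learned classification head, and the embeddings would come from trained audio and text encoders.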
Alternatives and similar repositories for Multimodal-action-recognition:
Users interested in Multimodal-action-recognition are comparing it to the libraries listed below:
- PyTorch implementation of the CVPR 2021 paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" ☆85 · Updated 3 years ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral) ☆121 · Updated last year
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition" ☆56 · Updated 2 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆30 · Updated 4 years ago
- Code for the CVPR 2023 paper "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆43 · Updated last year
- ☆15 · Updated 4 years ago
- Official implementation of the CVPR 2022 paper "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" ☆98 · Updated 2 years ago
- Code for the NAACL 2021 paper "MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences" ☆42 · Updated last year
- Implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment An…" ☆68 · Updated last year
- Cross Attentional AV Fusion for Dimensional Emotion Recognition (FG 2021) ☆26 · Updated last month
- Vision Transformers are Parameter-Efficient Audio-Visual Learners ☆95 · Updated last year
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆80 · Updated 3 years ago
- Code repo for the ICASSP 2023 paper "MMCosine: Multi-Modal Cosine Loss Towards Balanced Audio-Visual Fine-Grained Learning" ☆18 · Updated last year
- MUSIC-AVQA (CVPR 2022 Oral) ☆72 · Updated 2 years ago
- ☆66 · Updated 3 years ago
- Official implementation of VAANet for Emotion Recognition (AAAI 2020) ☆76 · Updated last year
- Official code repo for "TCLR: Temporal Contrastive Learning for Video Representation" (CVIU 2022) ☆35 · Updated 10 months ago
- Self-Supervised Learning by Cross-Modal Audio-Video Clustering (NeurIPS 2020) ☆90 · Updated 2 years ago
- Code for the AVLnet and Cascaded Multilingual papers (both Interspeech 2021) ☆50 · Updated 2 years ago
- CrossCLR: Cross-modal Contrastive Learning for Multi-modal Video Representations (ICCV 2021) ☆59 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- Official implementation of the ICCV 2023 paper "EmoSet: A Large-Scale Visual Emotion Dataset with Rich Attributes" ☆41 · Updated 9 months ago
- Code for the CVPR 2022 paper "Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and …" ☆35 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020) ☆109 · Updated 4 years ago
- PyTorch implementation of the models described in the IEEE ICASSP 2022 paper "Is cross-attention preferable to self-attention for multi-m…" ☆57 · Updated 2 years ago
- Implementation of the paper "Context-Aware Emotion Recognition Networks" ☆25 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Code for the paper "Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation" ☆30 · Updated last year
- PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models ☆86 · Updated 3 years ago