akashe / Multimodal-action-recognition
Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text.
☆73 · Updated 4 years ago
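The description above (choosing an action from voice and text inputs) can be sketched as a simple late-fusion classifier. This is an illustrative assumption, not the repository's actual code: the action labels, embedding dimensions, and linear scorer below are all hypothetical stand-ins for a trained model.

```python
import random

# Hypothetical sketch of late fusion: concatenate a voice embedding and a
# text embedding, then score each candidate action with a linear classifier.
random.seed(0)

ACTIONS = ["play_music", "set_alarm", "send_message"]  # hypothetical labels
VOICE_DIM, TEXT_DIM = 4, 3

def fuse(voice_vec, text_vec):
    """Late fusion by concatenating the two modality embeddings."""
    return voice_vec + text_vec  # list concatenation

# Random linear weights per action, standing in for trained parameters.
weights = {a: [random.uniform(-1, 1) for _ in range(VOICE_DIM + TEXT_DIM)]
           for a in ACTIONS}

def select_action(voice_vec, text_vec):
    """Return the action whose linear score on the fused vector is highest."""
    fused = fuse(voice_vec, text_vec)
    scores = {a: sum(w * x for w, x in zip(weights[a], fused))
              for a in ACTIONS}
    return max(scores, key=scores.get)

action = select_action([0.2, 0.5, 0.1, 0.9], [0.3, 0.7, 0.4])
print(action in ACTIONS)  # True
```

In practice the repositories listed below replace the concatenation step with learned fusion (attention, transformers, contrastive alignment), but the select-the-highest-scoring-action structure is the same.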
Alternatives and similar repositories for Multimodal-action-recognition
Users interested in Multimodal-action-recognition are comparing it to the libraries listed below.
- PyTorch implementation of "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" (CVPR 2021) ☆89 · Updated 4 years ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral) ☆124 · Updated 2 years ago
- Code for the NAACL 2021 paper "MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences" ☆42 · Updated 2 years ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers ☆53 · Updated 3 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆115 · Updated 3 years ago
- ☆16 · Updated 4 years ago
- Self-Supervised Learning by Cross-Modal Audio-Video Clustering (NeurIPS 2020) ☆91 · Updated 3 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 4 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆81 · Updated 2 years ago
- Code and dataset for "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" (MM '20) ☆55 · Updated 2 years ago
- ☆212 · Updated 3 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆82 · Updated 4 years ago
- FG 2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆33 · Updated 11 months ago
- ☆70 · Updated 4 years ago
- Using VideoBERT to tackle video prediction ☆131 · Updated 4 years ago
- PySlowFast: a video understanding codebase from FAIR for reproducing state-of-the-art video models ☆87 · Updated 4 years ago
- PyTorch implementation of Multi-modal Dense Video Captioning (CVPR 2020 Workshops) ☆143 · Updated 2 years ago
- Code for Discriminative Sounding Objects Localization (NeurIPS 2020) ☆59 · Updated 3 years ago
- ☆31 · Updated 4 years ago
- Code for the CVPR 2022 paper "Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and …" ☆40 · Updated 2 years ago
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition" ☆64 · Updated 3 years ago
- A Transformer-based joint encoding for Emotion Recognition and Sentiment Analysis ☆125 · Updated 8 months ago
- A collection of various multi-modal transformer architectures, including image transformer, video transformer, image-languag… ☆230 · Updated 3 years ago
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆125 · Updated 5 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆30 · Updated 5 years ago
- Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing ☆18 · Updated 3 years ago
- Cross-modal active contrastive coding ☆22 · Updated 4 years ago
- CrossCLR: Cross-modal Contrastive Learning for Multi-modal Video Representations (ICCV 2021) ☆64 · Updated 3 years ago
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆135 · Updated 4 years ago
- Official implementation of AdaMML (https://arxiv.org/abs/2105.05165) ☆51 · Updated 3 years ago