yudhik11 / MultimodalMusicRecsys
Official repository of "Multimodal Fusion Based Attentive Networks for Sequential Music Recommendation", accepted at BIGMM 2021
☆14 · Updated 2 years ago
Alternatives and similar repositories for MultimodalMusicRecsys:
Users interested in MultimodalMusicRecsys are comparing it to the repositories listed below.
- Sentiment analysis of song lyrics compared to auditory track features and valence ☆12 · Updated 2 years ago
- New Last.fm Dataset 2020 for music auto-tagging purposes. ☆31 · Updated last year
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆38 · Updated last year
- ☆88 · Updated 2 years ago
- Official code for the paper "Towards developing a Multi Modal Video Recommendation system" ☆15 · Updated 2 years ago
- PyTorch Implementation of Introducing Self-Attention to Target Attentive Graph Neural Networks (AISP '22) ☆26 · Updated 3 years ago
- This is the GitHub repository containing the code for the Context-Aware Sequential Recommendation project for the Information Retrieval 2… ☆11 · Updated 2 years ago
- Code for the paper "Bilateral Variational Autoencoder for Collaborative Filtering", WSDM'21 ☆34 · Updated 2 years ago
- Multi-domain Recommendation with Adapter Tuning ☆30 · Updated last year
- Experiments with multimodal deep learning models based on transformers ☆12 · Updated 2 years ago
- Official repository of "Transformer-based approach towards music emotion recognition from lyrics", accepted at ECIR 2021 ☆42 · Updated 4 years ago
- PersEmoN: A Deep Network for Joint Analysis of Apparent Personality, Emotion and Their Relationship ☆12 · Updated 5 years ago
- Monitor Chrome browsing to detect levels of depression ☆18 · Updated 5 years ago
- Implementation of the paper "Speech emotion recognition with deep convolutional neural networks" by Dias Issa et al. ☆12 · Updated 3 years ago
- Human emotion understanding using a multimodal dataset. ☆97 · Updated 4 years ago
- Code for selecting an action based on multimodal inputs; in this case the inputs are voice and text. ☆73 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- Official source code for the paper "It’s Just a Matter of Time: Detecting Depression with Time-Enriched Multimodal Transformers" ☆54 · Updated last year
- Emo-CLIM: Emotion-Aligned Contrastive Learning Between Images and Music [ICASSP 2024] ☆13 · Updated last year
- Source code for the paper "MM-Rec: Visiolinguistic Model Empowered Multimodal News Recommendation". ☆22 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- [ECIR 2024] Official repository for the paper "Self Contrastive Learning for Session-based Recommendation" ☆21 · Updated last year
- Official code for the paper "Attention Calibration for Transformer-based Sequential Recommendation" ☆15 · Updated last year
- Submission to the MediaEval 2021 Emotions and Themes in Music challenge; noisy-student training for music emotion tagging ☆11 · Updated 3 years ago
- Multimodal short-video classification task, integrating video, image, audio, and text modalities for short-video classification ☆19 · Updated 5 years ago
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆102 · Updated 2 years ago
- Source code for the paper "Plug-in Diffusion Model for Sequential Recommendation", accepted at AAAI 2024, by Haokai Ma, Ruobing Xie… ☆25 · Updated last year
- ☆20 · Updated 2 years ago
- [Official Codes] Experiments on Generalizability of User-Oriented Fairness in Recommender Systems (SIGIR 2022) ☆32 · Updated 2 years ago
- Multimodal classification solution for the SIGIR eCOM using co-attention and transformer language models ☆19 · Updated 4 years ago