yudhik11 / MER-lyrics-Transformer
Official repository of "Transformer-based approach towards music emotion recognition from lyrics", accepted at ECIR 2021
☆42 · Updated 4 years ago
Alternatives and similar repositories for MER-lyrics-Transformer:
Users interested in MER-lyrics-Transformer are comparing it to the repositories listed below.
- (Unofficial) PyTorch Implementation of Music Mood Detection Based On Audio And Lyrics With Deep Neural Net ☆104 · Updated 5 years ago
- MIDI, WAV domain music emotion recognition [ISMIR 2021] ☆80 · Updated 3 years ago
- Master thesis on Music Emotion Recognition ☆18 · Updated 5 years ago
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆38 · Updated last year
- Code of the lileonardo team for the 2021 Emotion and Theme Recognition in Music task of MediaEval 2021 ☆14 · Updated 3 years ago
- PMEmo: A Dataset For Music Emotion Computing ☆106 · Updated last year
- ☆95 · Updated 3 years ago
- music genre classification: LSTM vs Transformer ☆61 · Updated 2 years ago
- Generates multi-instrument symbolic music (MIDI), based on user-provided emotions from the valence-arousal plane. ☆64 · Updated last month
- Controlling an LSTM to generate music with a given sentiment (positive or negative). ☆38 · Updated 3 years ago
- This repository collects information about different data sets for Music Emotion Recognition. ☆240 · Updated 2 years ago
- Music is a medium to express emotion. According to literature, music emotion can be quantified continuously as valence and arousal (VA) d… ☆9 · Updated 5 years ago
- Semi-supervised learning using teacher-student models for vocal melody extraction ☆42 · Updated 3 years ago
- Emotion-conditioned music generation using a transformer-based model. ☆149 · Updated 2 years ago
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆17 · Updated last year
- ☆37 · Updated 4 years ago
- MediaEval 2020: Music Mood Classification ☆18 · Updated 4 years ago
- Predicting emotion from music videos: exploring the relative contribution of visual and auditory information on affective responses ☆22 · Updated last year
- This is the code repository for the paper "Emotion-Guided Music Accompaniment Generation based on VAE". ☆12 · Updated last year
- Code accompanying the ISMIR 2020 paper "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature M… ☆52 · Updated 4 years ago
- The source code of "A Streamlined Encoder/Decoder Architecture for Melody Extraction" ☆73 · Updated 5 years ago
- Code accompanying the paper: An Attention Mechanism for Musical Instrument Recognition. ISMIR 2019 ☆24 · Updated 5 years ago
- ☆14 · Updated 4 years ago
- The repository of the paper: Wang et al., Learning interpretable representation for controllable polyphonic music generation, ISMIR 2020. ☆42 · Updated last year
- Introducing multi-channel U-Net for Music Source Separation trained using weighted multi-task loss. ☆32 · Updated 2 years ago
- A PyTorch Implementation of the paper - Choi, Woosung, et al. "Investigating u-nets with various intermediate blocks for spectrogram-base… ☆79 · Updated 2 years ago
- "Joint Detection and Classification of Singing Voice Melody Using Convolutional Recurrent Neural Networks" ☆124 · Updated 5 years ago
- ☆107 · Updated 4 years ago
- Chord-Conditioned Melody Transformer ☆36 · Updated 3 years ago
- MusAV: a dataset of relative arousal-valence annotations for validation of audio models ☆15 · Updated 2 years ago