yudhik11 / MER-lyrics-Transformer
Official repository of "Transformer-based approach towards music emotion recognition from lyrics", accepted at ECIR 2021
☆42 · Updated 4 years ago
Alternatives and similar repositories for MER-lyrics-Transformer
Users interested in MER-lyrics-Transformer are comparing it to the repositories listed below.
- MIDI, WAV domain music emotion recognition [ISMIR 2021] ☆82 · Updated 3 years ago
- (Unofficial) Pytorch Implementation of Music Mood Detection Based On Audio And Lyrics With Deep Neural Net ☆105 · Updated 5 years ago
- PMEmo: A Dataset For Music Emotion Computing ☆109 · Updated last year
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆38 · Updated last year
- Master thesis on Music Emotion Recognition ☆18 · Updated 5 years ago
- Controlling an LSTM to generate music with given sentiment (positive or negative). ☆37 · Updated 3 years ago
- Code of the lileonardo team for the 2021 Emotion and Theme Recognition in Music task of MediaEval 2021 ☆14 · Updated 3 years ago
- This repository collects information about different data sets for Music Emotion Recognition. ☆245 · Updated 2 years ago
- Emotional conditioned music generation using transformer-based model. ☆154 · Updated 2 years ago
- ☆96 · Updated 3 years ago
- Generates multi-instrument symbolic music (MIDI), based on user-provided emotions from the valence-arousal plane. ☆64 · Updated 3 months ago
- ☆28 · Updated 5 years ago
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆17 · Updated last year
- MusAV: a dataset of relative arousal-valence annotations for validation of audio models ☆15 · Updated 2 years ago
- Dataset of piano arrangements of video game soundtracks labelled according to sentiment. ☆70 · Updated 2 years ago
- Results and Models for Learning Audio Representations of Music Content ☆100 · Updated 6 months ago
- Companion code for ISMIR 2017 paper "Deep Salience Representations for F0 Estimation in Polyphonic Music" ☆93 · Updated 5 years ago
- The repository of the paper: Wang et al., Learning interpretable representation for controllable polyphonic music generation, ISMIR 2020. ☆42 · Updated last year
- Accompanying code for our ISMIR 2020 paper on mood estimation. ☆33 · Updated 3 years ago
- Music is a medium to express emotion. According to literature, music emotion can be quantified continuously as valence and arousal (VA) d… ☆9 · Updated 5 years ago
- The source code of "A Streamlined Encoder/Decoder Architecture for Melody Extraction" ☆73 · Updated 5 years ago
- Self-supervised VQ-VAE for One-Shot Music Style Transfer ☆95 · Updated 4 months ago
- ☆122 · Updated 5 years ago
- ☆36 · Updated 2 years ago
- A minimal JukeMIR branch for feature extraction. ☆32 · Updated 3 years ago
- Semi-supervised learning using teacher-student models for vocal melody extraction ☆42 · Updated 3 years ago
- Code accompanying ISMIR 2020 paper - "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature M… ☆52 · Updated 4 years ago
- ☆37 · Updated 5 years ago
- Music genre classification: LSTM vs Transformer ☆61 · Updated 2 years ago
- Algorithm and Data for paper "Automatic Detection of Hierarchical Structure and Influence of Structure on Melody, Harmony and Rhythm in P… ☆96 · Updated 2 years ago