AMAAI-Lab / MuVi
Predicting emotion from music videos: exploring the relative contribution of visual and auditory information on affective responses
☆21 · Updated last year
Alternatives and similar repositories for MuVi
Users interested in MuVi are comparing it to the repositories listed below.
- Code of the lileonardo team for the 2021 Emotion and Theme Recognition in Music task of MediaEval 2021 ☆14 · Updated 3 years ago
- Official Implementation of "Multitrack Music Transformer" (ICASSP 2023) ☆147 · Updated last year
- MIDI- and WAV-domain music emotion recognition [ISMIR 2021] ☆83 · Updated 3 years ago
- Emotion-conditioned music generation using a transformer-based model. ☆159 · Updated 2 years ago
- The implementation of "Systematic Analysis of Music Representations from BERT" ☆23 · Updated 2 years ago
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆17 · Updated last year
- Generates multi-instrument symbolic music (MIDI) based on user-provided emotions from the valence-arousal plane. ☆65 · Updated 5 months ago
- ☆96 · Updated 3 years ago
- ☆72 · Updated 3 years ago
- ☆33 · Updated last year
- Official implementation of "Learning Music Audio Representations Via Weak Language Supervision" (ICASSP 2022) ☆47 · Updated 8 months ago
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆38 · Updated 2 years ago
- Evaluation metrics for machine-composed symbolic music. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-… ☆64 · Updated 4 years ago
- Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022) ☆120 · Updated 8 months ago
- Source code for "MusCaps: Generating Captions for Music Audio" (IJCNN 2021) ☆84 · Updated 8 months ago
- Code accompanying the ISMIR 2020 paper "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature M… ☆52 · Updated 4 years ago
- Code repository for the paper "Emotion-Guided Music Accompaniment Generation based on VAE". ☆12 · Updated last year
- Repository for the paper: Wang et al., "Learning interpretable representation for controllable polyphonic music generation", ISMIR 2020. ☆42 · Updated last year
- A minimal JukeMIR branch for feature extraction. ☆32 · Updated 3 years ago
- IMEMNet Dataset ☆19 · Updated 4 years ago
- The official implementation of Theme Transformer, for theme-based music generation (IEEE TMM). ☆125 · Updated 2 years ago
- MusAV: a dataset of relative arousal-valence annotations for validation of audio models ☆15 · Updated 2 years ago
- Official Implementation of Jointist ☆36 · Updated 2 years ago
- Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls ☆82 · Updated last year
- Chord-Conditioned Melody Harmonization with Controllable Harmonicity [ICASSP 2023] ☆46 · Updated 2 years ago
- (Unofficial) PyTorch implementation of "Music Mood Detection Based On Audio And Lyrics With Deep Neural Net" ☆105 · Updated 5 years ago
- Results and models for learning audio representations of music content ☆100 · Updated 8 months ago
- ☆10 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Semi-supervised learning using teacher-student models for vocal melody extraction ☆42 · Updated 3 years ago