AMAAI-Lab / MuVi
Predicting emotion from music videos: exploring the relative contribution of visual and auditory information on affective responses
☆ 22 · Updated last year
Alternatives and similar repositories for MuVi
Users that are interested in MuVi are comparing it to the libraries listed below
- Official Implementation of "Multitrack Music Transformer" (ICASSP 2023) ☆ 146 · Updated last year
- Code of the lileonardo team for the 2021 Emotion and Theme Recognition in Music task of MediaEval 2021 ☆ 14 · Updated 3 years ago
- The implementation of "Systematic Analysis of Music Representations from BERT" ☆ 23 · Updated 2 years ago
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆ 38 · Updated 2 years ago
- Generates multi-instrument symbolic music (MIDI) based on user-provided emotions from the valence-arousal plane. ☆ 64 · Updated 4 months ago
- Chord-Conditioned Melody Harmonization with Controllable Harmonicity [ICASSP 2023] ☆ 46 · Updated 2 years ago
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆ 17 · Updated last year
- Emotion-conditioned music generation using a transformer-based model. ☆ 155 · Updated 2 years ago
- IMEMNet Dataset ☆ 19 · Updated 4 years ago
- ☆ 10 · Updated 2 years ago
- Source code for "MusCaps: Generating Captions for Music Audio" (IJCNN 2021) ☆ 84 · Updated 7 months ago
- ☆ 33 · Updated last year
- ☆ 96 · Updated 3 years ago
- MIDI- and WAV-domain music emotion recognition [ISMIR 2021] ☆ 82 · Updated 3 years ago
- Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022) ☆ 120 · Updated 7 months ago
- Evaluation metrics for machine-composed symbolic music. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-… ☆ 64 · Updated 4 years ago
- MusAV: a dataset of relative arousal-valence annotations for validation of audio models ☆ 15 · Updated 2 years ago
- This is the code repository for the paper "Emotion-Guided Music Accompaniment Generation based on VAE". ☆ 12 · Updated last year
- Code accompanying ISMIR 2020 paper - "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature M… ☆ 52 · Updated 4 years ago
- This is the official implementation of MusER (AAAI'24). ☆ 29 · Updated last month
- (Unofficial) PyTorch implementation of "Music Mood Detection Based On Audio And Lyrics With Deep Neural Net" ☆ 105 · Updated 5 years ago
- ☆ 72 · Updated 3 years ago
- The repository of the paper: Wang et al., "Learning Interpretable Representation for Controllable Polyphonic Music Generation", ISMIR 2020. ☆ 42 · Updated last year
- Results and Models for Learning Audio Representations of Music Content ☆ 100 · Updated 7 months ago
- Official implementation of "Learning Music Audio Representations Via Weak Language Supervision" (ICASSP 2022) ☆ 47 · Updated 7 months ago
- ScorePerformer: Expressive Piano Performance Rendering with Fine-Grained Control (ISMIR 2023) ☆ 40 · Updated 4 months ago
- ☆ 23 · Updated 5 years ago
- SurpriseNet: Melody Harmonization Conditioning on User-controlled Surprise Contours ☆ 28 · Updated last month
- Z. Wang & G. Xia, "MuseBERT: Pre-training of Music Representation for Music Understanding and Controllable Generation", ISMIR 2021 ☆ 46 · Updated 3 years ago
- Code for our ACM MM 2020 best paper "PiRhDy: Learning Pitch-, Rhythm-, and Dynamics-aware Embeddings for Symbolic Music" ☆ 32 · Updated 3 years ago