yagyapandeya / Supervised-Music-Video-Emotion-Classification
An extended and verified music video emotion analysis dataset for data-driven algorithms.
☆18 · Updated 4 years ago
Alternatives and similar repositories for Supervised-Music-Video-Emotion-Classification
Users interested in Supervised-Music-Video-Emotion-Classification are comparing it to the repositories listed below.
- ☆11 · Updated 9 months ago
- End-to-end beat and downbeat tracking in the time domain. ☆124 · Updated 4 years ago
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆17 · Updated 2 years ago
- MIDI, WAV domain music emotion recognition [ISMIR 2021] ☆87 · Updated 4 years ago
- Emotional conditioned music generation using transformer-based model. ☆167 · Updated 3 years ago
- ☆73 · Updated 3 years ago
- Toward Universal Text-to-Music-Retrieval (TTMR) [ICASSP23] ☆114 · Updated 2 years ago
- Timbre transfer with variational autoencoding and cycle-consistent adversarial networks. Able to transfer the timbre of an audio source t… ☆68 · Updated 4 years ago
- The source code of "A Streamlined Encoder/Decoder Architecture for Melody Extraction" ☆73 · Updated 6 years ago
- chorus detection for pop music ☆46 · Updated 3 years ago
- Semi-supervised learning using teacher-student models for vocal melody extraction ☆43 · Updated 4 years ago
- Submission to MediaEval 2021 Emotions and Themes in Music challenge. Noisy-student training for music emotion tagging ☆11 · Updated 4 years ago
- "Joint Detection and Classification of Singing Voice Melody Using Convolutional Recurrent Neural Networks" ☆131 · Updated 6 years ago
- Predicting emotion from music videos: exploring the relative contribution of visual and auditory information on affective responses ☆22 · Updated 2 years ago
- (Unofficial) Pytorch Implementation of Music Mood Detection Based On Audio And Lyrics With Deep Neural Net ☆112 · Updated 6 years ago
- Source code for models described in the paper "ESResNe(X)t-fbsp: Learning Robust Time-Frequency Transformation of Audio" (https://arxiv.o… ☆47 · Updated 4 years ago
- PyTorch implementation of MuseMorphose (published at IEEE/ACM TASLP), a Transformer-based model for music style transfer. ☆193 · Updated 3 years ago
- "Pop Music Highlighter: Marking the Emotion Keypoints", TISMIR vol. 1, no. 1 ☆114 · Updated 7 years ago
- Generates multi-instrument symbolic music (MIDI), based on user-provided emotions from valence-arousal plane. ☆65 · Updated 11 months ago
- ISMIR 2020 Paper repo: Music SketchNet: Controllable Music Generation via Factorized Representations of Pitch and Rhythm ☆82 · Updated 2 years ago
- Companion code for ISMIR 2017 paper "Deep Salience Representations for $F_0$ Estimation in Polyphonic Music" ☆93 · Updated 6 years ago
- Official pytorch implementation of the paper: "Catch-A-Waveform: Learning to Generate Audio from a Single Short Example" (NeurIPS 2021) ☆191 · Updated last year
- Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022) ☆122 · Updated last year
- Supplementary material for the ISMIR 2020 paper: "Deconstruct, Analyse, Reconstruct: how to improve tempo, beat, and downbeat estimation"… ☆11 · Updated 4 years ago
- Code for "Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features" arXiv:2110.08862, 2021. ☆26 · Updated 4 years ago
- ☆58 · Updated 5 years ago
- Official Repository of "Transformer-based approach towards music emotion recognition from lyrics" accepted in ECIR 2021 ☆42 · Updated 4 years ago
- Code of the lileonardo team for the 2021 Emotion and Theme Recognition in Music task of MediaEval 2021 ☆15 · Updated 4 years ago
- Source code for "MusCaps: Generating Captions for Music Audio" (IJCNN 2021) ☆85 · Updated last year
- implementation of improved musical onset detection with cnn ☆56 · Updated 5 years ago