yagyapandeya / Supervised-Music-Video-Emotion-Classification
An extended and verified music video emotion analysis dataset for data-driven algorithms.
☆16 · Updated 3 years ago
Alternatives and similar repositories for Supervised-Music-Video-Emotion-Classification
Users interested in Supervised-Music-Video-Emotion-Classification are comparing it to the libraries listed below.
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆17 · Updated last year
- MIDI, WAV domain music emotion recognition [ISMIR 2021] ☆82 · Updated 3 years ago
- ☆11 · Updated last month
- IMEMNet Dataset ☆19 · Updated 4 years ago
- Submission to the MediaEval 2021 Emotions and Themes in Music challenge. Noisy-student training for music emotion tagging ☆11 · Updated 3 years ago
- ☆72 · Updated 3 years ago
- The source code of "A Streamlined Encoder/Decoder Architecture for Melody Extraction" ☆73 · Updated 5 years ago
- Emotion-conditioned music generation using a transformer-based model. ☆154 · Updated 2 years ago
- End-to-end beat and downbeat tracking in the time domain. ☆122 · Updated 3 years ago
- Controlling an LSTM to generate music with a given sentiment (positive or negative). ☆37 · Updated 3 years ago
- ☆36 · Updated 2 years ago
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆38 · Updated last year
- Semi-supervised learning using teacher-student models for vocal melody extraction ☆42 · Updated 3 years ago
- This repo contains the code to reproduce the paper "Enriched Music Representations with Multiple Cross-modal Contrastive Learning" ☆15 · Updated 2 years ago
- The official implementation of "TONet: Tone-Octave Network for Singing Melody Extraction from Polyphonic Music" ☆41 · Updated 2 years ago
- Generates multi-instrument symbolic music (MIDI) based on user-provided emotions from the valence-arousal plane. ☆64 · Updated 3 months ago
- The repository of the paper: Wang et al., Learning interpretable representation for controllable polyphonic music generation, ISMIR 2020. ☆42 · Updated last year
- Implementations for the master's thesis "Musical Instrument Recognition in Multi-Instrument Audio Contexts" with MedleyDB. ☆15 · Updated 6 years ago
- (Unofficial) PyTorch implementation of Music Mood Detection Based On Audio And Lyrics With Deep Neural Net ☆105 · Updated 5 years ago
- Code of the lileonardo team for the 2021 Emotion and Theme Recognition in Music task of MediaEval 2021 ☆14 · Updated 3 years ago
- ☆96 · Updated 3 years ago
- This is the official implementation of EmoMusicTV (TMM). ☆23 · Updated last year
- Audio Embeddings as Teachers for Music Classification ☆13 · Updated last year
- MediaEval 2020: Music Mood Classification ☆18 · Updated 4 years ago
- ☆71 · Updated 2 weeks ago
- Supplementary material for the ISMIR 2020 paper "Deconstruct, Analyse, Reconstruct: how to improve tempo, beat, and downbeat estimation"… ☆11 · Updated 4 years ago
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned S… ☆54 · Updated 4 years ago
- Code accompanying the ISMIR 2020 paper "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature M…" ☆52 · Updated 4 years ago
- Source code for "MusCaps: Generating Captions for Music Audio" (IJCNN 2021) ☆84 · Updated 6 months ago
- This is the code repository for the paper "Emotion-Guided Music Accompaniment Generation based on VAE". ☆12 · Updated last year