yagyapandeya / Supervised-Music-Video-Emotion-Classification
An extended and verified music video emotion analysis dataset for data-driven algorithms.
☆16 · Updated 3 years ago
Alternatives and similar repositories for Supervised-Music-Video-Emotion-Classification:
Users interested in Supervised-Music-Video-Emotion-Classification are comparing it to the libraries listed below.
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… ☆17 · Updated last year
- MIDI- and WAV-domain music emotion recognition [ISMIR 2021] ☆75 · Updated 3 years ago
- IMEMNet Dataset ☆16 · Updated 4 years ago
- The source code of "A Streamlined Encoder/Decoder Architecture for Melody Extraction" ☆71 · Updated 4 years ago
- Implementations for the master's thesis "Musical Instrument Recognition in Multi-Instrument Audio Contexts" with MedleyDB. ☆15 · Updated 5 years ago
- Semi-supervised learning using teacher-student models for vocal melody extraction ☆42 · Updated 3 years ago
- This repo contains the code to reproduce the paper "Enriched Music Representations with Multiple Cross-modal Contrastive Learning". ☆14 · Updated last year
- Generates multi-instrument symbolic music (MIDI) based on user-provided emotions from the valence-arousal plane. ☆63 · Updated 8 months ago
- Submission to the MediaEval 2021 Emotions and Themes in Music challenge: noisy-student training for music emotion tagging ☆11 · Updated 3 years ago
- Code of the lileonardo team for the Emotion and Theme Recognition in Music task of MediaEval 2021 ☆14 · Updated 3 years ago
- PMEmo: A Dataset for Music Emotion Computing ☆102 · Updated 9 months ago
- Emotion-conditioned music generation using a transformer-based model. ☆146 · Updated 2 years ago
- [PyTorch] Minimal codebase for MusicGen models ☆52 · Updated 3 weeks ago
- End-to-end beat and downbeat tracking in the time domain. ☆119 · Updated 3 years ago
- ☆35 · Updated last year
- This is the official implementation of EmoMusicTV (TMM). ☆22 · Updated last year
- This is the code repository for the paper "Emotion-Guided Music Accompaniment Generation Based on VAE". ☆12 · Updated last year
- Code accompanying the ISMIR 2020 paper "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature M… ☆51 · Updated 4 years ago
- The repository of the paper: Wang et al., "Learning Interpretable Representation for Controllable Polyphonic Music Generation", ISMIR 2020. ☆41 · Updated 10 months ago
- The goal of this task is to automatically recognize the emotions and themes conveyed in a music recording using machine learning algorith… ☆37 · Updated last year
- Predicting emotion from music videos: exploring the relative contribution of visual and auditory information on affective responses ☆20 · Updated last year
- ☆93 · Updated 3 years ago
- MediaEval 2020: Music Mood Classification ☆18 · Updated 3 years ago
- Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022) ☆111 · Updated last month
- Source code for "MusCaps: Generating Captions for Music Audio" (IJCNN 2021) ☆78 · Updated last month
- Implementation of improved musical onset detection with a CNN ☆52 · Updated 4 years ago
- ☆28 · Updated 4 years ago
- ☆72 · Updated 2 years ago
- Code for the ISMIR 2022 paper "Beat Transformer: Demixed Beat and Downbeat Tracking with Dilated Self-Attention" ☆100 · Updated 9 months ago
- PyTorch implementation of the ECCV 2020 paper "Foley Music: Learning to Generate Music from Videos" ☆39 · Updated 4 years ago