linkAmy / IMEMNet
IMEMNet Dataset
☆14 · Updated 3 years ago
Related projects:
- PyTorch implementation of ECCV 2020 paper "Foley Music: Learning to Generate Music from Videos" (☆41, updated 3 years ago)
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation (☆65, updated 5 months ago)
- MIDI, WAV domain music emotion recognition [ISMIR 2021] (☆69, updated 2 years ago)
- Code for reproducing the experiments and results of "Multi-Source Contrastive Learning from Musical Audio", accepted for publication in S… (☆17, updated 10 months ago)
- Emotion-conditioned music generation using a transformer-based model (☆138, updated last year)
- Music Audio Representation Benchmark for Universal Evaluation (☆84, updated 4 months ago)
- Source code for the paper "Audio Captioning Transformer" (☆47, updated 2 years ago)
- A dataset for Audio-Visual Sound Event Detection in Movies (☆25, updated last year)
- This package aims at simplifying the download of the AudioCaps dataset (☆29, updated 9 months ago)
- PyTorch implementation for "V2C: Visual Voice Cloning" (☆30, updated last year)
- Z. Wang & G. Xia, MuseBERT: Pre-training of Music Representation for Music Understanding and Controllable Generation, ISMIR 2021 (☆43, updated 2 years ago)
- Code for the IEEE Signal Processing Letters 2022 paper "UAVM: Towards Unifying Audio and Visual Models" (☆55, updated last year)
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned S… (☆49, updated 3 years ago)
- Evaluation metrics for machine-composed symbolic music. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-…" (☆60, updated 3 years ago)
- Unofficial PyTorch implementation of Masked Autoencoders that Listen (☆61, updated 2 years ago)
- Official implementation of EmoMusicTV (TMM) (☆17, updated 8 months ago)
- Implementation of the paper "On Metric Learning For Audio-Text Cross-Modal Retrieval" (☆41, updated 2 years ago)
- Repository of the paper: Wang et al., Learning Interpretable Representation for Controllable Polyphonic Music Generation, ISMIR 2020 (☆40, updated 5 months ago)
- Task for automatically recognizing the emotions and themes conveyed in a music recording using machine learning algorith… (☆37, updated last year)
- Official implementation of "Multitrack Music Transformer" (ICASSP 2023) (☆133, updated 6 months ago)
- The dataset and baseline code for Text-to-Audio Grounding (TAG) (☆37, updated last month)
- PMEmo: A Dataset For Music Emotion Computing (☆91, updated 5 months ago)
- Official code for the CVPR 2024 paper Diff-BGM (☆38, updated 5 months ago)
- "Joint Detection and Classification of Singing Voice Melody Using Convolutional Recurrent Neural Networks" (☆119, updated 4 years ago)
- Code for the ACM MM 2020 best paper "PiRhDy: Learning Pitch-, Rhythm-, and Dynamics-aware Embeddings for Symbolic Music" (☆30, updated 2 years ago)