pkmital / time-domain-neural-audio-style-transfer
NIPS 2017 "Time Domain Neural Audio Style Transfer" code repository
☆139 · Updated 3 years ago
Alternatives and similar repositories for time-domain-neural-audio-style-transfer
Users interested in time-domain-neural-audio-style-transfer are comparing it to the repositories listed below.
- SpecGAN - generate audio with adversarial training ☆113 · Updated 7 years ago
- LSTM to generate drum tracks based on Metallica's MIDI drum tracks ☆107 · Updated 6 years ago
- TensorFlow implementation of a VAE for encoding spectrograms ☆69 · Updated 7 years ago
- Torch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model ☆157 · Updated 8 years ago
- PyTorch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model ☆294 · Updated 2 years ago
- TensorFlow implementation of SampleRNN ☆143 · Updated 5 years ago
- Symbol-to-Instrument Neural Generator ☆159 · Updated 4 years ago
- Music INSTrument dataset ☆62 · Updated 9 years ago
- TiFGAN: Time Frequency Generative Adversarial Networks ☆120 · Updated 3 years ago
- TensorFlow implementation of the models used in "End-to-end learning for music audio tagging at scale" ☆152 · Updated 6 years ago
- A Universal Music Translation Network implementation ☆27 · Updated 7 years ago
- Vector Quantized Contrastive Predictive Coding for Template-based Music Generation ☆82 · Updated 2 years ago
- Code for “Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation” ☆58 · Updated 3 years ago
- Vocode spectrograms to audio with generative adversarial networks ☆63 · Updated 6 years ago
- Singing Voice Separation via Recurrent Inference and Skip-Filtering Connections - PyTorch implementation. Demo: ☆171 · Updated 7 years ago
- Auralisation of learned features in CNN (for audio) ☆42 · Updated 8 years ago
- Code for paper submission under review. ☆35 · Updated 8 years ago
- Supplementary material of "Deep Unsupervised Drum Transcription", ISMIR 2019 ☆133 · Updated last year
- Deep dreams on audio spectrograms, resynthesized into deep-learning-generated audio effects ☆30 · Updated 8 years ago
- ☆64 · Updated 6 years ago
- Human Voice Wave Samples ☆83 · Updated 10 years ago
- Progressively Growing GAN in PyTorch for image and sound generation ☆113 · Updated 7 years ago
- Code for creating a dataset of MIDI ground truth ☆168 · Updated 6 years ago
- ☆75 · Updated 5 years ago
- The code for the MaD TwinNet. Demo page: ☆112 · Updated 2 years ago
- Train and generate melodies for pop music with recurrent neural networks ☆96 · Updated 7 years ago
- ☆110 · Updated 8 years ago
- Deep Convolutional Networks on the Pitch Spiral for Musical Instrument Recognition ☆41 · Updated 9 years ago
- Generating birdsong with WaveNet ☆29 · Updated 7 years ago
- Hierarchical fast and high-fidelity audio generation ☆77 · Updated last year