mathigatti / MusicVideoMaker
Automatically generate a music video by extracting scenes from another video
☆31 · Updated last year
Alternatives and similar repositories for MusicVideoMaker
Users interested in MusicVideoMaker are also comparing it to the libraries listed below.
- [DEPRECATED] Morpheus Music AI implementation spin-off :) ☆17 · Updated 2 years ago
- Detect individual instrument activity in an audio file. ☆16 · Updated 3 years ago
- [DEAD/NOT SUPPORTED ANYMORE] This is the only fully working and functioning version of Google Magenta Piano Transformer Colab Notebook. ☆25 · Updated 2 years ago
- GPT3-based Multi-Instrumental MIDI Music AI Implementation ☆48 · Updated 3 years ago
- OpenAI's GPT2 based Music AI Google Colab Notebooks for Music Generation/Composition and Capabilities Evaluation ☆45 · Updated 4 years ago
- SOTA Piano Transformer model trained on 4.2GB of Solo Piano MIDI music ☆25 · Updated last year
- Wave-U-Net for automatic (drum) mixing ☆38 · Updated 2 years ago
- Automatic DJ-mixing of tracks ☆33 · Updated 5 years ago
- Python implementation of the "Shazam" algorithm ☆51 · Updated 6 years ago
- Do you think that AI can write songs for us? This project is a music generator powered by AI. ☆36 · Updated 6 years ago
- Full LAKH MIDI dataset converted to MuseNet MIDI output format (9 instruments + drums) ☆18 · Updated 3 years ago
- ☆14 · Updated last year
- Fork of AudioLDM as a TuneFlow plugin ☆41 · Updated 2 years ago
- Music Generative Pretrained Transformer ☆27 · Updated 2 years ago
- An application of vocal melody extraction. ☆58 · Updated 5 years ago
- Music Generation in MIDI format using Deep Learning. ☆17 · Updated last year
- Prepare spectrograms from audio for training a Riffusion model ☆15 · Updated 2 years ago
- OpenAI MuseNet API Colab Notebook ☆33 · Updated 2 years ago
- SurpriseNet: Melody Harmonization Conditioning on User-controlled Surprise Contours ☆28 · Updated last month
- Unofficial implementation of JEN-1 Composer: A Unified Framework for High-Fidelity Multi-Track Music Generation (https://arxiv.org/abs/2310.1… ☆31 · Updated last year
- Implementation of the framework described in the paper Spectrogram Inpainting for Interactive Generation of Instrument Sounds published a… ☆40 · Updated 2 years ago
- Word2Wave: a framework for generating short audio samples from a text prompt using WaveGAN and COALA. ☆119 · Updated 3 years ago
- ☆11 · Updated last year
- Models and datasets for training deep learning automatic mixing models ☆101 · Updated 9 months ago
- SOTA kilo-scale MIDI dataset for MIR and Music AI purposes ☆58 · Updated last year
- Here we will track the latest Audio AI Agent, including speech, music, sound effects, etc. ☆15 · Updated last year
- Code for "Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features", arXiv:2110.08862, 2021. ☆24 · Updated 3 years ago
- Conditional lyrics generator -> pre-trained GPT2 model fine-tuned on a lyrics-with-features dataset. ☆40 · Updated 5 years ago
- [Exclusive for GitHub] deep-muse: Advanced Text-to-Music Generator Implementation ☆16 · Updated 3 years ago
- [DEPRECATED] [PyTorch 2.0] [638M] [85.33% acc] Full-attention multi-instrumental music transformer for supervised music generation, opti… ☆31 · Updated last year