archinetai / audio-ai-timeline
A timeline of the latest AI models for audio generation, starting in 2023!
☆1,899 · Updated last year
Alternatives and similar repositories for audio-ai-timeline:
Users interested in audio-ai-timeline are comparing it to the repositories listed below.
- Audio generation using diffusion models, in PyTorch. ☆2,040 · Updated last year
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,629 · Updated 4 months ago
- Implementation of AudioLM, a SOTA language-modeling approach to audio generation from Google Research, in PyTorch ☆2,533 · Updated 3 months ago
- Tools to train a generative model on arbitrary audio samples ☆1,097 · Updated last year
- Fast Infinite Waveform Music Generation ☆674 · Updated 2 years ago
- Implementation of MusicLM, Google's SOTA model for music generation using attention networks, in PyTorch ☆3,247 · Updated last year
- Apply diffusion models using the Hugging Face diffusers package to synthesize music instead of images. ☆756 · Updated 7 months ago
- Community list of startups working with AI in audio and music technology ☆1,647 · Updated 3 months ago
- Contrastive Language-Audio Pretraining ☆1,631 · Updated this week
- A straightforward collection of music generation research resources. ☆601 · Updated 3 months ago
- Stable diffusion for real-time music generation (web app) ☆2,653 · Updated 9 months ago
- Stable diffusion for real-time music generation ☆3,666 · Updated 9 months ago
- Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder ☆1,486 · Updated last week
- Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications. ☆543 · Updated last year
- Official implementation of "Separate Anything You Describe" ☆1,725 · Updated 5 months ago
- State-of-the-art deep-learning-based audio codec supporting both mono 24 kHz and stereo 48 kHz audio. ☆3,673 · Updated last year
- A list of demo websites for automatic music generation research ☆698 · Updated last week
- Audio dataset for training CLAP and other models ☆679 · Updated last year
- Text-to-Audio/Music Generation ☆2,416 · Updated 7 months ago
- MIDI / symbolic music tokenizers for deep learning models 🎶 ☆772 · Updated last week
- Implementation of SoundStorm, efficient parallel audio generation from Google DeepMind, in PyTorch ☆1,494 · Updated last week
- Official PyTorch implementation of BigVGAN (ICLR 2023) ☆1,015 · Updated 8 months ago
- ☆392 · Updated 3 months ago
- A family of diffusion models for text-to-audio generation. ☆1,160 · Updated 4 months ago
- DeepAFx-ST: style transfer of audio effects with differentiable signal processing. See https://csteinmetz1.github.io/DeepAFx-ST/ ☆385 · Updated last year
- List of academic resources on multimodal ML for music ☆295 · Updated 2 years ago
- Convert any music library into a music-production sample library with ML ☆1,533 · Updated 8 months ago
- Implementation of Meta's Voicebox: the first generative AI model for speech to generalize across tasks with state-of-the-art performance. ☆581 · Updated last year
- Implementation of NaturalSpeech 2, a zero-shot speech and singing synthesizer, in PyTorch ☆1,318 · Updated last year
- Collection of audio-focused loss functions in PyTorch (a minimal from-scratch sketch of one such loss follows below) ☆774 · Updated 9 months ago
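
The last entry covers losses that compare waveforms in the time-frequency domain rather than sample by sample. As a rough illustration of the idea only, and not the API of any repository listed here, the sketch below implements a multi-resolution STFT loss from scratch in plain PyTorch; the FFT sizes, hop lengths, epsilon values, and equal weighting of the two terms are arbitrary choices for the example.

```python
import torch
import torch.nn.functional as F


def stft_magnitude(x, n_fft, hop):
    """Magnitude spectrogram of a batch of waveforms x with shape (batch, samples)."""
    window = torch.hann_window(n_fft, device=x.device)
    spec = torch.stft(x, n_fft=n_fft, hop_length=hop, window=window, return_complex=True)
    return spec.abs()  # (batch, n_fft // 2 + 1, frames)


def multi_resolution_stft_loss(pred, target,
                               resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Average spectral-convergence + log-magnitude L1 loss over several STFT resolutions."""
    loss = 0.0
    for n_fft, hop in resolutions:
        p = stft_magnitude(pred, n_fft, hop)
        t = stft_magnitude(target, n_fft, hop)
        # Spectral convergence: relative Frobenius distance between magnitude spectrograms.
        sc = torch.linalg.norm(t - p) / torch.linalg.norm(t).clamp(min=1e-8)
        # L1 distance between log magnitudes, emphasising quieter spectral content.
        mag = F.l1_loss(torch.log(p.clamp(min=1e-7)), torch.log(t.clamp(min=1e-7)))
        loss = loss + sc + mag
    return loss / len(resolutions)


# Example: compare two one-second batches of 24 kHz audio (random noise here).
pred = torch.randn(2, 24000)
target = torch.randn(2, 24000)
print(multi_resolution_stft_loss(pred, target))
```

Using several STFT resolutions keeps the loss sensitive both to fine temporal detail (small windows) and to narrow-band frequency content (large windows), which is why this family of losses is common in neural vocoder and codec training.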