HilaManor / AudioEditingCode
☆130 · Updated last week
Related projects:
- Refactored / updated version of `stable-audio-tools`, an open-source codebase for audio/music generative models originally by Stabili… ☆111 · Updated last month
- Fine-tune Stable Audio Open with DiT ControlNet. ☆155 · Updated 2 weeks ago
- Metrics for evaluating music and audio generative models, with a focus on long-form, full-band, and stereo generations. ☆140 · Updated last month
- Fine-tuning MusicGen without prompts to generate music in a specific style. ☆54 · Updated last year
- Official code and models for the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆141 · Updated 5 months ago
- Audio generation using diffusion models, in PyTorch. ☆44 · Updated 11 months ago
- Flexible LoRA implementation for use with stable-audio-tools. ☆37 · Updated last week
- ☆54 · Updated last month
- Fine-tune your own MusicGen with LoRA. ☆95 · Updated 4 months ago
- Official code for the paper "Draw an Audio: Leveraging Multi-Instruction for Video-to-Audio Synthesis". ☆21 · Updated last week
- ☆74 · Updated 8 months ago
- Trainer for audio-diffusion-pytorch. ☆127 · Updated last year
- A notebook containing scripts, documentation, and examples for fine-tuning MusicGen. ☆74 · Updated 5 months ago
- ☆106 · Updated 11 months ago
- A latent diffusion model for text-to-music generation. ☆151 · Updated 7 months ago
- The official implementation of the paper "Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language Models via Instruction Tu… ☆65 · Updated 2 weeks ago
- Unofficial implementation of JEN-1 Composer: A Unified Framework for High-Fidelity Multi-Track Music Generation (https://arxiv.org/abs/2310.1… ☆27 · Updated 8 months ago
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23]. ☆266 · Updated 5 months ago
- A Cog implementation of the fine-tuner for Meta's MusicGen. ☆46 · Updated 5 months ago
- ☆154 · Updated 10 months ago
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model. ☆140 · Updated last month
- Code for "Investigating Personalization Methods in Text to Music Generation". ☆29 · Updated 5 months ago
- CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model. ☆177 · Updated 4 months ago
- The Song Describer dataset: an evaluation dataset of ~1.1k captions for 706 permissively licensed music recordings. ☆131 · Updated 8 months ago
- The demo page for UniAudio. ☆34 · Updated 7 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models. ☆148 · Updated 3 months ago
- A diffusion-based model for efficiently synthesizing long-context, high-fidelity music. ☆194 · Updated last year
- The official GitHub page for the survey paper "Foundation Models for Music: A Survey". ☆79 · Updated 2 weeks ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers. ☆68 · Updated 2 months ago
- The official implementation of PeriodWave and PeriodWave-Turbo. ☆107 · Updated last month