HilaManor / AudioEditingCode
☆187 · Updated 2 months ago
Alternatives and similar repositories for AudioEditingCode
Users interested in AudioEditingCode are comparing it to the repositories listed below.
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation" ☆194 · Updated last year
- Fine-tune Stable Audio Open with DiT ControlNet. ☆249 · Updated 8 months ago
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆279 · Updated this week
- Refactored / updated version of `stable-audio-tools`, an open-source codebase for audio/music generative models originally by Stability AI ☆215 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆200 · Updated last year
- A text-conditional diffusion probabilistic model capable of generating high-fidelity audio. ☆188 · Updated last year
- ☆85 · Updated last year
- Mustango: Toward Controllable Text-to-Music Generation ☆387 · Updated 8 months ago
- The official implementation of our paper "Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language Models via Instruction Tuning" ☆106 · Updated 2 weeks ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆93 · Updated 2 years ago
- [ICASSP'24] Investigating Personalization Methods in Text to Music Generation ☆45 · Updated last year
- JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment ☆150 · Updated 5 months ago
- Fine-tune your own MusicGen with LoRA ☆156 · Updated last year
- Flexible LoRA Implementation to use with stable-audio-tools ☆79 · Updated last year
- Awesome music generation model: MG² ☆165 · Updated 10 months ago
- AudioLDM training, finetuning, evaluation and inference. ☆294 · Updated last year
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model ☆190 · Updated last year
- The official implementation of the IJCAI 2024 paper "MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models". ☆47 · Updated last year
- ☆114 · Updated 7 months ago
- Controlled audio inpainting using Riffusion, an SD-fine-tuned model, in a ControlNet architecture ☆33 · Updated 2 years ago
- Official implementation of the pipeline presented in "I hear your true colors: Image Guided Audio Generation" ☆124 · Updated 3 years ago
- Official repository of the paper "MuQ: Self-Supervised Music Representation Learning with Mel Residual Vector Quantization". ☆307 · Updated 5 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆118 · Updated 8 months ago
- [SOTA] [92% acc] 786M-8k-44L-32H multi-instrumental music transformer with true full MIDI instruments range, efficient encoding, octo-vel… ☆88 · Updated last year
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] ☆343 · Updated last year
- The latent diffusion model for text-to-music generation. ☆183 · Updated 2 years ago
- The Song Describer dataset is an evaluation dataset made of ~1.1k captions for 706 permissively licensed music recordings. ☆167 · Updated 2 years ago
- CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages [ACL 2025] ☆217 · Updated 8 months ago
- PyTorch implementation of SoundCTM ☆100 · Updated 10 months ago
- ACM MM 2023 CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model ☆211 · Updated last year