soham97 / ADIFF
Explaining audio differences using language
★15 · Updated 6 months ago
Alternatives and similar repositories for ADIFF
Users interested in ADIFF are comparing it to the repositories listed below.
- ★40 · Updated 4 months ago
- [Official Implementation] Acoustic Autoregressive Modeling 🔥 ★71 · Updated last year
- Codebase and project page for EDMSound ★34 · Updated last year
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025) ★28 · Updated 8 months ago
- ★22 · Updated last week
- Small audio language model for reasoning ★74 · Updated 4 months ago
- Inference codebase for "Cacophony: An Improved Contrastive Audio-Text Model". Preprint: https://arxiv.org/abs/2402.06986 ★48 · Updated 10 months ago
- A spoken version of the textual story cloze benchmark ★18 · Updated 2 years ago
- Implementation of Multi-Source Music Generation with Latent Diffusion ★26 · Updated 11 months ago
- ★37 · Updated 5 months ago
- LAFMA: A Latent Flow Matching Model for Text-to-Audio Generation (INTERSPEECH 2024) ★39 · Updated last year
- This repo contains the official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … ★85 · Updated last year
- Source code for the paper "Audio Captioning Transformer" ★56 · Updated 3 years ago
- PyTorch implementation of the ICASSP-24 paper "Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Superv…" ★38 · Updated last year
- Official Implementation of EnCLAP (ICASSP 2024) ★94 · Updated last year
- Official Repository of IJCAI 2024 Paper: "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ★29 · Updated 6 months ago
- [ICASSP 2025] Official code for VoiceDiT: Dual-Condition Diffusion Transformer for Environment-Aware Speech Synthesis ★23 · Updated 4 months ago
- "Music Style Transfer with Time-Varying Inversion of Diffusion Models" ★53 · Updated last year
- ★11 · Updated last year
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ★80 · Updated 2 months ago
- Implementation of the paper "T-FOLEY: A Controllable Waveform-Domain Diffusion Model for Temporal-Event-Guided Foley Sound Synthesis", ac… ★33 · Updated last year
- [ACL 2024] This is the PyTorch code for our paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ★89 · Updated 9 months ago
- ConsistencyTTA: Accelerating Diffusion-Based Text-to-Audio Generation with Consistency Distillation ★34 · Updated 9 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ★106 · Updated 3 months ago
- Dataset/code for AudioMarkBench: Benchmarking Robustness of Audio Watermarking ★40 · Updated last year
- [InterSpeech'2024] FluentEditor: Text-based Speech Editing by Considering Acoustic and Prosody Consistency ★55 · Updated 10 months ago
- Unofficial download repository for MusicCaps ★47 · Updated 2 years ago
- ★42 · Updated 2 years ago
- Code for the IEEE Signal Processing Letters 2022 paper "UAVM: Towards Unifying Audio and Visual Models" ★55 · Updated 2 years ago
- PyTorch implementation for "V2C: Visual Voice Cloning" ★32 · Updated 2 years ago