NVIDIA / audio-flamingo
PyTorch implementation of Audio Flamingo: Series of Advanced Audio Understanding Language Models
☆758 · Updated last month
Alternatives and similar repositories for audio-flamingo
Users interested in audio-flamingo are comparing it to the libraries listed below.
- Unified automatic quality assessment for speech, music, and sound. ☆613 · Updated 4 months ago
- A family of state-of-the-art Transformer-based audio codecs for low-bitrate, high-quality audio coding. ☆397 · Updated last month
- Multi-Scale Neural Audio Codec (SNAC) compresses audio into discrete codes at a low bitrate. ☆691 · Updated 11 months ago
- Code for SpeechTokenizer, presented in "SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models". Samples a… ☆612 · Updated last year
- The open-source code of UniAudio ☆578 · Updated last year
- Official repository of the paper "MuQ: Self-Supervised Music Representation Learning with Mel Residual Vector Quantization". ☆264 · Updated 2 months ago
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand". ☆458 · Updated last year
- Daily tracking of awesome audio papers, including music generation, zero-shot TTS, ASR, audio generation ☆403 · Updated last month
- Implementation of E2-TTS, "Embarrassingly Easy Fully Non-Autoregressive Zero-Shot TTS", in PyTorch ☆505 · Updated 7 months ago
- Audio Large Language Models ☆750 · Updated 3 months ago
- Codec for the paper "LLaSA: Scaling Train-time and Inference-time Compute for LLaMA-based Speech Synthesis" ☆318 · Updated 3 months ago
- Automatically updated list of text-to-speech (TTS) papers, refreshed daily via GitHub Actions (updated every 12 hours) ☆532 · Updated this week
- AudioBench: A Universal Benchmark for Audio Large Language Models ☆265 · Updated 4 months ago
- LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM ☆286 · Updated 5 months ago
- LLaSA: Scaling Train-time and Inference-time Compute for LLaMA-based Speech Synthesis ☆622 · Updated 6 months ago
- VoiceBench: Benchmarking LLM-Based Voice Assistants ☆297 · Updated 2 months ago
- MU-LLaMA: Music Understanding Large Language Model ☆289 · Updated 2 months ago
- ☆378 · Updated last year
- A Framework for Speech, Language, Audio, Music Processing with Large Language Model ☆904 · Updated last month
- ☆314 · Updated 2 weeks ago
- Official implementation of the paper "Acoustic Music Understanding Model with Large-Scale Self-supervised Training". ☆413 · Updated 4 months ago
- Whisper-Flamingo [Interspeech 2024] and mWhisper-Flamingo [IEEE SPL 2025] for Audio-Visual Speech Recognition and Translation ☆182 · Updated 2 months ago
- Learning audio concepts from natural language supervision ☆602 · Updated last year
- Implementation of Voicebox, the new SOTA text-to-speech network from Meta AI, in PyTorch ☆666 · Updated last year
- Collection of resources on the applications of Large Language Models (LLMs) in Audio AI. ☆694 · Updated last week
- [INTERSPEECH 2024] EmoBox: Multilingual Multi-corpus Speech Emotion Recognition Toolkit and Benchmark ☆284 · Updated 6 months ago
- AudioLDM training, finetuning, evaluation and inference. ☆277 · Updated 10 months ago
- Official PyTorch implementation of BigVGAN (ICLR 2023) ☆1,119 · Updated last year
- Real-time Speech-Text Foundation Model Toolkit (wip) ☆247 · Updated 6 months ago
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆245 · Updated this week