kyutai-labs / moshivis
Kyutai with an "eye"
☆227 · Updated 8 months ago
Alternatives and similar repositories for moshivis
Users interested in moshivis are comparing it to the libraries listed below.
- Liquid Audio - Speech-to-Speech audio models by Liquid AI ☆285 · Updated 2 months ago
- VoiceStar: Robust, Duration-controllable TTS that can Extrapolate ☆297 · Updated 6 months ago
- ☆530 · Updated 2 months ago
- ☆314 · Updated 3 months ago
- The official GitHub Page for MiniMax ☆60 · Updated last month
- LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM ☆291 · Updated 6 months ago
- ☆158 · Updated 7 months ago
- ☆101 · Updated last year
- The official repo for the paper "Spatial Speech Translation: Translating Across Space With Binaural Hearables" ☆70 · Updated 3 months ago
- Fast Streaming TTS with Orpheus + WebRTC (with FastRTC) ☆345 · Updated 7 months ago
- ☆205 · Updated last month
- ☆338 · Updated 2 months ago
- Easy-to-use, High-Performance Knowledge Distillation for LLMs ☆97 · Updated 7 months ago
- ☆250 · Updated 6 months ago
- Service for testing out the new Qwen2.5 omni model ☆62 · Updated 7 months ago
- Collection of Open Source Speech Data ☆163 · Updated 2 months ago
- Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B ☆527 · Updated 2 weeks ago
- SlamKit is an open-source toolkit for efficient training of SpeechLMs. It was used for "Slamming: Training a Speech Language Model on On…" ☆224 · Updated 6 months ago
- ☆476 · Updated 7 months ago
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆123 · Updated 4 months ago
- AnyModal is a Flexible Multimodal Language Model Framework for PyTorch ☆103 · Updated 11 months ago
- Official repository for "VideoPrism: A Foundational Visual Encoder for Video Understanding" (ICML 2024) ☆328 · Updated 2 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- ☆470 · Updated 6 months ago
- AudioStory: Generating Long-Form Narrative Audio with Large Language Models ☆289 · Updated 2 months ago
- OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language. ☆591 · Updated last month
- An open-source chatbot architecture for voice/vision (and multimodal) assistants, runnable locally (CPU/GPU-bound) or remotely (I/O-bound). ☆88 · Updated this week
- An open-source implementation of Whisper ☆466 · Updated last month
- A pipeline-parallel training script for LLMs. ☆164 · Updated 7 months ago