gpt-omni / mini-omni2
Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities
☆1,788 · Updated 6 months ago
Alternatives and similar repositories for mini-omni2
Users interested in mini-omni2 are comparing it to the repositories listed below.
- Open-source multimodal large language model that can hear, talk while thinking. Featuring real-time end-to-end speech input and streaming… ☆3,385 · Updated 9 months ago
- ✨✨VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆629 · Updated 2 months ago
- [ICLR 2025] SOTA discrete acoustic codec models with 40/75 tokens per second for audio language modeling ☆1,178 · Updated 5 months ago
- Build multimodal language agents for fast prototyping and production ☆2,543 · Updated 4 months ago
- ✨✨VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,378 · Updated 4 months ago
- [ICLR 2025] Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation ☆3,604 · Updated 5 months ago
- Align Anything: Training All-modality Model with Feedback ☆4,455 · Updated 2 months ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆356 · Updated 2 months ago
- Turn detection for full-duplex dialogue communication ☆371 · Updated this week
- PyTorch implementation of [ThinkSound], a unified framework for generating audio from any modality, guided by Chain-of-Thought (CoT) reas… ☆939 · Updated 3 weeks ago
- Inference code for the paper "Spirit-LM Interleaved Spoken and Written Language Model" ☆917 · Updated 9 months ago
- The official repo of Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud ☆1,826 · Updated 3 months ago
- "Vimo: Chat with Your Videos" ☆861 · Updated last week
- ☆928 · Updated 4 months ago
- [CVPR 2025] Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Video Diffusion Transformer ☆1,296 · Updated 4 months ago
- On-device AI inference in minutes—now for MLX & GGUF, with Android, iOS, and NPU backends coming soon ☆4,644 · Updated this week
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,051 · Updated this week
- The code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆745 · Updated this week
- ☆1,528 · Updated this week
- ☆430 · Updated 3 months ago
- [ICLR 2025 Oral] TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio-Motion Embedding and Diffusion Interpolation ☆1,081 · Updated last month
- OpenMusic: SOTA Text-to-Music (TTM) Generation ☆602 · Updated last month
- Resources of our paper "FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces". New versions in the maki… ☆1,009 · Updated 4 months ago
- 🔥 [ICCV 2025 Highlight] InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity ☆2,571 · Updated 2 weeks ago
- Dolphin is a multilingual, multitask ASR model jointly trained by DataoceanAI and Tsinghua University ☆580 · Updated 3 weeks ago
- Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI (Kunlun Inc.), specializing in vision-language reasoning ☆2,927 · Updated last week
- Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models ☆915 · Updated 4 months ago
- Allegro is a powerful text-to-video model that generates high-quality videos up to 6 seconds at 15 FPS and 720p resolution from simple te… ☆1,090 · Updated 6 months ago
- LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆2,970 · Updated 2 months ago
- ✨✨Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM ☆334 · Updated 2 months ago