GiantAILab / DeepSound-V1
Official code for DeepSound-V1
☆12 · Updated 4 months ago
Alternatives and similar repositories for DeepSound-V1
Users interested in DeepSound-V1 are comparing it to the repositories listed below.
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆25 · Updated 2 weeks ago
- ☆22 · Updated 3 weeks ago
- [ICCV 2025] FonTS: Text Rendering with Typography and Style Controls ☆27 · Updated 3 weeks ago
- Code for our ICML 2025 paper, LENSLLM: Unveiling Fine-Tuning Dynamics for LLM Selection 🎉 ☆24 · Updated 3 months ago
- [ICML 2025] Official PyTorch implementation of "🎵 HarmoniCa: Harmonizing Training and Inference for Better Feature Caching i… ☆42 · Updated 2 months ago
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆98 · Updated 2 months ago
- [CVPR 2025] Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation ☆60 · Updated last week
- Smoothed Preference Optimization via ReNoise Inversion for Aligning Diffusion Models with Varied Human Preferences (ICML 2025) ☆23 · Updated 2 months ago
- [IJCV 2025] Smaller But Better: Unifying Layout Generation with Smaller Large Language Models ☆146 · Updated last month
- [ICML 2025] Official code of From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection ☆22 · Updated 2 months ago
- ☆55 · Updated 4 months ago
- ☆20 · Updated 2 months ago
- (arXiv 2025) Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning ☆55 · Updated last month
- [CVPR 2025] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification ☆34 · Updated 5 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆215 · Updated last month
- 🚀 Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆31 · Updated last month
- Exploring and mitigating semantic hallucinations in scene text perception and reasoning ☆14 · Updated 3 months ago
- (arXiv 2025) Aesthetics is Cheap, Show me the Text: An Empirical Evaluation of State-of-the-Art Generative Models for OCR ☆227 · Updated 3 weeks ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 5 months ago
- ☆35 · Updated 3 weeks ago
- [ICML 2025 Oral] Official implementation of VideoRoPE & VideoRoPE++ ☆192 · Updated last month
- Code for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆59 · Updated last week
- 📚 Collection of token-level model compression resources ☆158 · Updated 2 weeks ago
- Doodling our way to AGI ✏️ 🖼️ 🧠 ☆102 · Updated 3 months ago
- [IEEE TPAMI 2025] Privacy-Preserving Biometric Verification With Handwritten Random Digit String ☆63 · Updated last month
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆56 · Updated 2 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆63 · Updated 4 months ago
- UniGenBench: A Unified T2I Generation Benchmark ☆47 · Updated this week
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆103 · Updated 3 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆29 · Updated last week