modelscope / flowra
☆72 · Updated 2 months ago
Alternatives and similar repositories for flowra
Users interested in flowra are comparing it to the libraries listed below.
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- ☆147 · Updated 6 months ago
- Official repository for the [AAAI'26 Oral] paper “StyleTailor: Towards Personalized Fashion Styling via Hierarchical Negative Feedback” ☆30 · Updated 2 months ago
- [EMNLP 2025 Demo] PresentAgent: Multimodal Agent for Presentation Video Generation ☆128 · Updated 2 months ago
- ☆194 · Updated 2 months ago
- ☆82 · Updated last month
- Our 2nd-gen LMM ☆34 · Updated last year
- ☆132 · Updated 7 months ago
- ☆17 · Updated 6 months ago
- Fathom-DeepResearch: Unlocking Long Horizon Information Retrieval And Synthesis For SLMs ☆52 · Updated 4 months ago
- ☆93 · Updated this week
- Demo for Qwen2.5-VL-3B-Instruct on an Axera device. ☆17 · Updated 5 months ago
- ☆35 · Updated last year
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual capability to any diffusion model without extra training) ☆146 · Updated last year
- ☆18 · Updated 9 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆284 · Updated 4 months ago
- ☆95 · Updated 11 months ago
- Cook up amazing multimodal AI applications effortlessly with MiniCPM-o ☆290 · Updated this week
- ☆30 · Updated 5 months ago
- Kaleido: Open-sourced multi-subject reference video generation model, enabling controllable, high-fidelity video synthesis from multiple … ☆112 · Updated last month
- GLM Series Edge Models ☆157 · Updated 7 months ago
- Tencent Hunyuan 7B (Hunyuan-7B for short) is one of Tencent Hunyuan's dense large language models ☆71 · Updated 5 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- ☆16 · Updated 6 months ago
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT)-based GenAI models with error-guided caching. ☆47 · Updated 6 months ago
- ☆34 · Updated 3 months ago
- ☆53 · Updated 6 months ago
- ☆18 · Updated 7 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆46 · Updated 4 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆58 · Updated last year