xverse-ai / XVERSE-V-13B
☆80Updated last year
Alternatives and similar repositories for XVERSE-V-13B
Users that are interested in XVERSE-V-13B are comparing it to the libraries listed below
- ☆173Updated 6 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data.☆248Updated last week
- An open-source multimodal large language model based on baichuan-7b☆72Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2☆126Updated 9 months ago
- A simple MLLM that surpasses QwenVL-Max using open-source data only, built on a 14B LLM.☆37Updated 11 months ago
- Multimodal chatbot with computer vision capabilities integrated, our 1st-gen LMM☆100Updated last year
- Our 2nd-gen LMM☆34Updated last year
- ☆39Updated 10 months ago
- GLM Series Edge Models☆147Updated 2 months ago
- ☆70Updated 2 years ago
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation☆142Updated 9 months ago
- Chinese CLIP models with SOTA performance.☆56Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆141Updated last year
- A light proxy solution for HuggingFace hub.☆47Updated last year
- An open-source LLM based on a Mixture-of-Experts (MoE) structure.☆58Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines☆124Updated 9 months ago
- A toolkit for running on-device large language models (LLMs) in apps☆77Updated last year
- Mixture-of-Experts (MoE) Language Model☆189Updated 11 months ago
- ☆231Updated last year
- ☆28Updated last year
- ☆29Updated last year
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model☆22Updated last year
- The simplest reproduction of R1-style results on small models, explaining the most important essence of O1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the "think" reasoning-process content is the core of AGI/ASI.☆43Updated 6 months ago
- ☆73Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently☆166Updated last year
- Research Code for Multimodal-Cognition Team in Ant Group☆163Updated last month
- SUS-Chat: Instruction tuning done right☆49Updated last year
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval☆214Updated 3 months ago
- Just for debug☆56Updated last year
- ☆57Updated last year