xverse-ai / XVERSE-7B
XVERSE-7B: A multilingual large language model developed by XVERSE Technology Inc.
☆54 Updated last year
Alternatives and similar repositories for XVERSE-7B
Users interested in XVERSE-7B are comparing it to the libraries listed below.
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆40 Updated last year
- The world's first Chinese-optimized version of StableVicuna. ☆63 Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 Updated last year
- An open-source LLM based on a Mixture-of-Experts (MoE) architecture. ☆58 Updated last year
- An open-source multimodal large language model based on baichuan-7b. ☆72 Updated last year
- zero: zero-training LLM parameter tuning. ☆32 Updated 2 years ago
- The first Chinese llama2 13b model (base model + Chinese dialogue SFT, enabling fluent multi-turn natural language interaction). ☆91 Updated 2 years ago
- GPT+ toolkit: a simple and practical one-stop AGI architecture with built-in localization, LLM models, agents, a vector database, and intelligent chains. ☆48 Updated 2 years ago
- AGM (Agem): an AI gene-map model that explores the inner workings of AI models and GPT/LLM large models from the perspective of token-weight granules. ☆29 Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆69 Updated 2 years ago
- Tracking hot GitHub repos, updated automatically every day. ☆49 Updated last week
- SUS-Chat: Instruction tuning done right. ☆49 Updated last year
- SuperCLUE Langya Leaderboard: an anonymous battle evaluation benchmark for general-purpose Chinese large models. ☆145 Updated last year
- A light proxy solution for HuggingFace hub. ☆47 Updated last year
- AGI module library architecture diagram. ☆76 Updated 2 years ago
- A pure-C++ cross-platform LLM acceleration library with Python bindings; supports baichuan, glm, llama, and moss base models, runs chatglm-6B-class models smoothly on mobile, and reaches 10,000+ tokens/s on a single GPU. ☆45 Updated 2 years ago
- Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory. ☆28 Updated last year
- Another ChatGLM2 implementation for GPTQ quantization. ☆55 Updated last year
- A cross-model technique combining multi-LoRA weight ensemble switching with Zero-Finetune enhancement (LLM-Base + LLM-X + Alpaca); initially, LLM-Base is the Chatglm6B base model and LLM-X is a LLAMA enhancement model. The approach is simple and efficient, aiming to make such language models widely deployable at low energy cost, and … ☆117 Updated 2 years ago
- ☆28 Updated last year
- Multimodal chatbot with integrated computer vision capabilities, our first-generation LMM. ☆100 Updated last year
- Comprehensive evaluation of the Chinese Llama2 open-source models on the SuperCLUE OPEN benchmark | Llama2 Chinese evaluation with SuperCLUE. ☆127 Updated 2 years ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024. ☆56 Updated 9 months ago
- Shared data: prompt data and pretraining data. ☆36 Updated last year
- OrionStar-Yi-34B-Chat is an open-source Chinese-English chat model fine-tuned by OrionStar from the open-source Yi-34B model using 150K+ high-quality corpus entries. ☆261 Updated last year
- A multimodal image-text dialogue large model built on Blip2RWKV + QFormer; using a Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters exhibits human-like causal chains of thought. It benchmarks against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, aiming for smaller compute and resource … ☆38 Updated 2 years ago
- A more efficient GLM implementation! ☆54 Updated 2 years ago
- Baidu QA dataset with 1 million entries. ☆48 Updated last year
- Gaokao Benchmark for AI. ☆108 Updated 3 years ago
- ☆106 Updated last year