xverse-ai / XVERSE-13B
XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc.
☆645 · Updated last year
Alternatives and similar repositories for XVERSE-13B
Users that are interested in XVERSE-13B are comparing it to the libraries listed below
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆444 · Updated 9 months ago
- Repo for adapting Meta LLaMA2 to Chinese! A Chinese adaptation of Meta's newly released LLaMA2 (fully open source and commercially usable). ☆742 · Updated last year
- Yuan 2.0 Large Language Model ☆689 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models. ☆609 · Updated 6 months ago
- A manually curated Chinese dialogue dataset and fine-tuning code for ChatGLM. ☆1,183 · Updated 2 months ago
- 骆驼 (Luotuo): A Chinese instruction-finetuned LLaMA. Developed by 陈启源 (Central China Normal University), 李鲁鲁 (SenseTime), and 冷子昂 (SenseTime). ☆721 · Updated 2 years ago
- TigerBot: A multi-language, multi-task LLM ☆2,256 · Updated 7 months ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆652 · Updated 2 years ago
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆417 · Updated 2 years ago
- WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023) ☆1,602 · Updated 4 months ago
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B, then merges the LoRA model with the base model and quantizes the result to 4 bits (see the sketch after this list). ☆361 · Updated last year
- Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA) ☆720 · Updated 2 years ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆774 · Updated 7 months ago
- Alpaca-format Chinese instruction fine-tuning dataset ☆392 · Updated 2 years ago
- Multi-GPU ChatGLM with DeepSpeed and … ☆407 · Updated last year
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,339 · Updated last year
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆266 · Updated last year
- Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sour… ☆1,409 · Updated 4 months ago
- Firefly Chinese LLaMA-2 large model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆410 · Updated last year
- This repo contains the data preparation, tokenization, training and inference code for BLOOMChat. BLOOMChat is a 176 billion parameter mu… ☆584 · Updated last year
- Open Multilingual Chatbot for Everyone ☆1,270 · Updated last month
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆236 · Updated 2 years ago
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆390 · Updated 2 years ago
- Collection of Chinese books, curated and organized ☆189 · Updated last year
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,078 · Updated 11 months ago
- Something like visual-chatgpt; an open-source version of 文心一言 (ERNIE Bot) ☆1,201 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆338 · Updated 3 months ago
- Unified embedding model ☆864 · Updated last year
- Multimodal Chinese LLaMA & Alpaca large language models (VisualCLA) ☆451 · Updated 2 years ago
- ChatGLM-6B fine-tuning and Alpaca fine-tuning ☆1,545 · Updated 4 months ago
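Several of the repos above centre on LoRA/QLoRA fine-tuning with the peft library. As context for comparing them, here is a minimal sketch of a 4-bit QLoRA setup like the one described in the ChatGLM QLoRA entry; the base model name (THUDM/chatglm2-6b), the LoRA target module (query_key_value), and the hyperparameters are illustrative assumptions, not any particular repo's exact configuration.

```python
# Minimal QLoRA sketch (assumed configuration, not a specific repo's code).
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "THUDM/chatglm2-6b"  # assumed base model

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModel.from_pretrained(
    base_model, quantization_config=bnb_config, trust_remote_code=True
)

# Attach trainable LoRA adapters on top of the frozen, quantized weights.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # assumed attention projection name
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable

# ... train with transformers.Trainer or a custom loop on instruction data ...
```

Merging the adapters back into the base model (and re-quantizing the merged weights to 4 bits, as that entry describes) is typically done afterwards by reloading the base model in full precision, applying the saved adapter with peft's PeftModel.from_pretrained, and calling merge_and_unload().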