xverse-ai / XVERSE-13B
XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc.
☆646 Updated 11 months ago
Alternatives and similar repositories for XVERSE-13B:
Users interested in XVERSE-13B are comparing it to the libraries listed below.
- The official repo of the Aquila2 series from BAAI, including pretrained and chat large language models. ☆440 Updated 5 months ago
- Repo for adapting Meta's LLaMA2 to Chinese! A Chinese adaptation of Meta's newly released LLaMA2 (fully open source and commercially usable). ☆746 Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment for large models. ☆596 Updated 2 months ago
- A manually curated Chinese dialogue dataset and fine-tuning code for ChatGLM. ☆1,174 Updated 10 months ago
- Firefly Chinese LLaMA-2 large model; supports continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models. ☆408 Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct. ☆655 Updated last year
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆420 Updated last year
- 🩹Editing large language models within 10 seconds⚡ ☆1,317 Updated last year
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]. ☆1,719 Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning. ☆988 Updated 11 months ago
- Luotuo (骆驼): A Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ Central China Normal University, 李鲁鲁 @ SenseTime, and 冷子昂 @ SenseTime. ☆716 Updated last year
- ChatGLM-6B finetuning and Alpaca finetuning. ☆1,541 Updated 2 weeks ago
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B, including merging the LoRA model into the base model and 4-bit quantization (see the sketch after this list). ☆358 Updated last year
- Explores how Chinese instruct data performs when fine-tuning ChatGLM and LLaMA. ☆390 Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆749 Updated 3 months ago
- Unified embedding model. ☆853 Updated last year
- WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023) ☆1,583 Updated 3 months ago
- Yuan 2.0 Large Language Model ☆685 Updated 8 months ago
- Alpaca-style Chinese instruction fine-tuning dataset. ☆392 Updated 2 years ago
- Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA). ☆722 Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆327 Updated 8 months ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples. ☆486 Updated 2 years ago
- TigerBot: A multi-language multi-task LLM ☆2,257 Updated 3 months ago
- Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sour… ☆1,285 Updated 3 weeks ago
- Multi-GPU ChatGLM training with DeepSpeed and … ☆408 Updated 8 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO questions as a dataset to evaluate large language models. ☆619 Updated 2 months ago
- Official code for the ACL 2023 paper "WebCPM: Interactive Web Search for Chinese Long-form Question Answering". ☆918 Updated last year
- Analysis of language models' Chinese cognitive abilities. ☆236 Updated last year
- Chinese Mixtral mixture-of-experts large models (Chinese Mixtral MoE LLMs). ☆601 Updated 10 months ago
- Cornucopia (聚宝盆): a series of open-source, commercially usable Chinese financial LLMs, with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.). ☆622 Updated last year
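Several of the fine-tuning projects above (most directly the ChatGLM QLoRA entry) follow the same basic workflow: load a base model in 4-bit, attach LoRA adapters with the peft library, train, then merge the adapters back into a full-precision base model. The sketch below illustrates that general pattern under stated assumptions; the model name, LoRA hyperparameters, target module name, and paths are illustrative, not taken from any listed repository.

```python
# A minimal sketch of a 4-bit QLoRA workflow with transformers + peft + bitsandbytes.
# All names and hyperparameters below are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import (
    LoraConfig,
    PeftModel,
    TaskType,
    get_peft_model,
    prepare_model_for_kbit_training,
)

base_model_name = "THUDM/chatglm2-6b"  # assumed base model

# 1. Load the base model quantized to 4-bit NF4 via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)

# 2. Prepare the quantized model for training and attach LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # assumed attention projection name in ChatGLM
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable

# 3. Train with your usual Trainer / training loop, then save the adapter:
# model.save_pretrained("output/lora_adapter")

# 4. Merge the LoRA adapter back into the base model. merge_and_unload()
#    needs a non-quantized base, so reload it in fp16 first.
base = AutoModel.from_pretrained(
    base_model_name, torch_dtype=torch.float16, trust_remote_code=True
)
merged = PeftModel.from_pretrained(base, "output/lora_adapter").merge_and_unload()
merged.save_pretrained("output/merged_model")
tokenizer.save_pretrained("output/merged_model")
```

The merged checkpoint can then be re-quantized to 4-bit for deployment (for example, by reloading it with the same `BitsAndBytesConfig`); the individual repositories listed above each have their own scripts and defaults for this step.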