ymcui / Chinese-Mixtral
Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs)
☆608 · Updated last year
Alternatives and similar repositories for Chinese-Mixtral
Users interested in Chinese-Mixtral are comparing it to the repositories listed below.
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆653 · Updated last year
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with LangChain integration for loading a local knowledge base for retrieval-augmented generation (RAG). ☆567 · Updated last year
- Yuan 2.0 Large Language Model ☆689 · Updated last year
- Llama3-Chinese is a large model built on Meta-Llama-3-8B and trained with DoRA + LoRA+ on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples. ☆295 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆780 · Updated 8 months ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆213 · Updated 10 months ago
- Multimodal Chinese LLaMA & Alpaca large language models (VisualCLA) ☆451 · Updated 2 years ago
- Repo for adapting Meta LLaMA 2 to Chinese! A Chinese adaptation of Meta's newly released LLaMA 2 (fully open source and commercially usable). ☆742 · Updated 2 years ago
- Firefly Chinese LLaMA-2 large model, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆413 · Updated last year
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆644 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆444 · Updated 10 months ago
- Huozi (活字) general-purpose large language model ☆393 · Updated 11 months ago
- The first Chinese chat model fine-tuned specifically for Chinese via ORPO, based on the Meta-Llama-3-8B-Instruct model. ☆322 · Updated last year
- A human-curated Chinese dialogue dataset and fine-tuning code for ChatGLM ☆1,188 · Updated 3 months ago
- Efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B using the peft library, plus merging the LoRA model into the base model and 4-bit quantization. ☆359 · Updated 2 years ago
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆418 · Updated 2 years ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,766 · Updated last month
- Play LLaMA2 (official / Chinese version / INT4 / llama2.cpp) together! Only 3 steps! (no GPU / 5 GB VRAM / 8-14 GB VRAM) ☆542 · Updated 2 years ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆655 · Updated 2 years ago
- Unified embedding model ☆867 · Updated 2 years ago
- Multi-GPU ChatGLM with DeepSpeed and … ☆410 · Updated last year
- Chinese large-model fine-tuning (LLM-SFT), math instruction dataset MWP-Instruct, supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supported methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (… ☆209 · Updated last year
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆678 · Updated 7 months ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models. ☆610 · Updated 7 months ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sour… ☆1,433 · Updated 5 months ago
- ☆350 · Updated last year
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning for multi-turn dialogue. ☆400 · Updated 2 years ago
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning. ☆1,013 · Updated last year
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆267 · Updated 2 years ago