ymcui / Chinese-Mixtral
Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs)
☆597 · Updated 9 months ago
Alternatives and similar repositories for Chinese-Mixtral:
Users interested in Chinese-Mixtral are comparing it to the repositories listed below
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆647 · Updated 6 months ago
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with langchain integration for loading a local knowledge base for retrieval-augmented generation (RAG). ☆529 · Updated 7 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆726 · Updated 2 months ago
- Multimodal Chinese LLaMA & Alpaca large language models (VisualCLA) ☆439 · Updated last year
- Firefly Chinese LLaMA-2 large models, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆406 · Updated last year
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,691 · Updated last year
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆421 · Updated last year
- Alpaca Chinese Dataset: a Chinese instruction fine-tuning dataset, continuously updated with human-written and GPT-4o-generated data ☆192 · Updated 4 months ago
- Huozi (活字), a general-purpose large language model ☆376 · Updated 5 months ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆653 · Updated last year
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models. ☆604 · Updated last month
- A 0.2B-parameter Chinese dialogue model (ChatLM-Chinese-0.2B), open-sourcing the complete code for the full pipeline: dataset sourcing, data cleaning, tokenizer training, model pre-training, SFT instruction fine-tuning, and RLHF optimization. Supports SFT fine-tuning for downstream tasks, with a fine-tuning example for triple-based information extraction. ☆1,410 · Updated 10 months ago
- Efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B using the peft library, including merging the LoRA model into the base model and 4-bit quantization (see the QLoRA sketch after this list). ☆356 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆440 · Updated 4 months ago
- Repo for adapting Meta Llama 2 to Chinese! A Chinese-adapted version of Meta's newly released Llama 2 (fully open source and commercially usable) ☆748 · Updated last year
- Tuning LLMs with no tears 💦; Sample Design Engineering (SDE) for more efficient downstream tuning. ☆984 · Updated 9 months ago
- ChatGLM on multiple GPUs using DeepSpeed and … ☆405 · Updated 7 months ago
- unified embedding model ☆849 · Updated last year
- A manually curated Chinese dialogue dataset with fine-tuning code for ChatGLM ☆1,166 · Updated 9 months ago
- [EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models ☆447 · Updated last month
- Llama3-Chinese is a large model built on the Meta-Llama-3-8B base, trained with DoRA + LoRA+ on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples. ☆295 · Updated 9 months ago
- C++ implementation of Qwen-LM ☆577 · Updated 2 months ago
- A repository for experimenting with and reproducing the pre-training process of LLMs. ☆395 · Updated 9 months ago
- A purer tokenizer with a higher compression ratio ☆471 · Updated 2 months ago
- Phase 3 of the Chinese LLaMA & Alpaca large model project (Chinese Llama-3 LLMs), developed from Meta Llama 3 ☆1,867 · Updated 4 months ago
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆648 · Updated 10 months ago
- A line-by-line annotated walkthrough of the Baichuan2 code, suitable for beginners ☆212 · Updated last year
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆263 · Updated last year
- Tongyi Qianwen (Qwen) vLLM inference deployment demo (see the vLLM sketch after this list) ☆521 · Updated 10 months ago
- Chinese LLM fine-tuning (LLM-SFT), math instruction dataset MWP-Instruct, supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (微… ☆185 · Updated 9 months ago
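
For the QLoRA entry above (peft-based 4-bit fine-tuning of chatGLM-6B/chatGLM2-6B followed by an adapter merge), here is a minimal sketch of that general pattern. It is not the listed repo's actual code: the target modules, hyperparameters, and paths are illustrative assumptions.

```python
# Minimal QLoRA sketch: load a base model in 4-bit via bitsandbytes,
# attach LoRA adapters with peft, and later merge the adapters back
# into a full-precision copy of the base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, PeftModel

base = "THUDM/chatglm2-6b"  # base model named in the listed repo

# 4-bit NF4 quantization config (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, trust_remote_code=True
)

# Attach low-rank adapters; target_modules is model-specific
# ("query_key_value" is typical for ChatGLM-style attention -- assumption here)
lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable

# ... run your SFT training loop (or transformers.Trainer) here ...

# Merge: reload the base in full precision, apply the saved adapter, merge.
full = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, trust_remote_code=True
)
merged = PeftModel.from_pretrained(full, "path/to/lora-checkpoint").merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```

The merge is done against a full-precision reload rather than the 4-bit model because LoRA weights cannot be folded directly into quantized weights; this matches the "merge lora model and base model" step the entry describes.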
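
The Qwen vLLM deployment entry refers to serving Qwen with vLLM; a minimal offline-inference sketch of that pattern follows. The checkpoint ID and sampling parameters are illustrative assumptions, not taken from the listed demo.

```python
# Minimal vLLM offline-inference sketch for a Qwen checkpoint.
from vllm import LLM, SamplingParams

# Hypothetical choice of checkpoint; Qwen models need trust_remote_code
llm = LLM(model="Qwen/Qwen-7B-Chat", trust_remote_code=True)
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

outputs = llm.generate(["Hello, please introduce yourself."], params)
print(outputs[0].outputs[0].text)  # first completion for the first prompt
```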