windmaple / Gemma-Chinese-instruction-tuning
A tutorial demonstrating Chinese instruction tuning for Gemma
☆46 Updated last year
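For orientation, below is a minimal sketch of what LoRA-based Chinese instruction tuning for Gemma typically looks like with transformers and peft. This is not the repository's actual script: the base checkpoint (`google/gemma-2b-it`), the two toy instruction/response pairs, and all hyperparameters are illustrative assumptions.

```python
# A minimal, illustrative sketch of LoRA instruction tuning for Gemma on
# Chinese data (not this repository's actual script). The base checkpoint,
# toy examples, and hyperparameters are assumptions for illustration only.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "google/gemma-2b-it"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach a small LoRA adapter so only a tiny fraction of the weights train.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Two toy Chinese instruction/response pairs standing in for a real SFT set.
pairs = [
    {"instruction": "用一句话介绍长城。", "output": "长城是中国古代修建的防御工事。"},
    {"instruction": "把“你好”翻译成英文。", "output": "Hello."},
]

def tokenize(example):
    # Format each pair with Gemma's chat turn markers, then tokenize.
    text = (f"<start_of_turn>user\n{example['instruction']}<end_of_turn>\n"
            f"<start_of_turn>model\n{example['output']}<end_of_turn>")
    return tokenizer(text, truncation=True, max_length=512)

dataset = Dataset.from_list(pairs).map(
    tokenize, remove_columns=["instruction", "output"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-zh-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=1),
    train_dataset=dataset,
    # Causal-LM collator: pads batches and copies input_ids to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma-zh-lora")  # saves only the LoRA adapter weights
```

In practice the toy pairs would be replaced by a full Chinese instruction dataset, and the adapter can later be merged back into the base model for deployment.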
Alternatives and similar repositories for Gemma-Chinese-instruction-tuning
Users interested in Gemma-Chinese-instruction-tuning are comparing it to the libraries listed below
- ☆174 Updated last year
- A survey of large language model training and serving ☆37 Updated 2 years ago
- Qwen1.5-SFT (Alibaba): fine-tuning of Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers, LoRA (peft), and inference ☆69 Updated last year
- A line-by-line annotated walkthrough of the Baichuan2 code, suitable for beginners ☆213 Updated 2 years ago
- Qwen models fine-tuning ☆105 Updated 10 months ago
- Training a mini Chinese large language model from scratch that can handle basic conversation, with the model size chosen to fit the hardware at hand ☆65 Updated last year
- ☆106 Updated 2 years ago
- Alpaca Chinese Dataset -- a Chinese instruction-tuning dataset ☆217 Updated last year
- Imitate OpenAI with Local Models ☆89 Updated last year
- An LLM training and testing toolkit built on HuggingFace. Supports web UI and terminal inference for each model, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization. ☆221 Updated 2 years ago
- Deep learning ☆149 Updated 8 months ago
- ☆235 Updated last year
- Baichuan and Baichuan2 fine-tuning and Alpaca fine-tuning ☆33 Updated 10 months ago
- An open-source LLM based on an MoE structure ☆58 Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆140 Updated last year
- A Chinese dataset distilled from the full-strength DeepSeek-R1 ☆63 Updated 11 months ago
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆265 Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆110 Updated 2 years ago
- GLM Series Edge Models ☆156 Updated 7 months ago
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities. ☆39 Updated last year
- Llama3-Chinese is a large model built on the Meta-Llama-3-8B base and trained with DoRA + LoRA+ on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples. ☆295 Updated last year
- Line-by-line explanation of Qwen 14B and 7B ☆63 Updated 2 years ago
- Training a small ChatGLM model from scratch ☆49 Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 Updated last year
- Training a Chinese vocabulary with sentencepiece BPE and using it in transformers ☆120 Updated 2 years ago
- This project aims to give beginners in the large-model field a comprehensive body of knowledge, covering both basic and advanced content, so developers can quickly master the LLM tech stack. ☆62 Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆89 Updated 2 years ago
- A large language model instruction-tuning tool (supports FlashAttention) ☆178 Updated 2 years ago
- A native Chinese retrieval-augmented generation benchmark ☆124 Updated last year
- A pure C++ cross-platform LLM acceleration library with Python bindings, supporting the baichuan, glm, llama, and moss bases; runs ChatGLM-6B-class models smoothly on mobile and reaches 10000+ tokens/s on a single GPU ☆45 Updated 2 years ago