windmaple / Gemma-Chinese-instruction-tuning
A tutorial demonstrating Chinese instruction tuning of Gemma
☆46 · Updated last year
Alternatives and similar repositories for Gemma-Chinese-instruction-tuning
Users interested in Gemma-Chinese-instruction-tuning are comparing it to the repositories listed below
- ☆170 · Updated last year
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners ☆213 · Updated 2 years ago
- Imitate OpenAI with Local Models ☆89 · Updated last year
- Train a Chinese mini large language model from scratch, capable of basic conversation; model size depends on the hardware at hand ☆65 · Updated last year
- Qwen1.5-SFT (Alibaba): fine-tuning of Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers / LoRA (peft) / inference ☆69 · Updated last year
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- ☆235 · Updated last year
- Awesome Chinese LLM: a curated list of Chinese large language model datasets and model resources ☆163 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆139 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- A pure-C++ cross-platform LLM acceleration library with Python bindings; supports baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU ☆45 · Updated 2 years ago
- Alpaca Chinese Dataset: a Chinese instruction-tuning dataset ☆217 · Updated last year
- Qwen models fine-tuning ☆105 · Updated 9 months ago
- A light proxy solution for HuggingFace hub ☆48 · Updated 2 years ago
- GLM Series Edge Models ☆156 · Updated 6 months ago
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- A toolkit for running on-device large language models (LLMs) in apps ☆79 · Updated last year
- ☆106 · Updated 2 years ago
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- Train a Chinese vocabulary with BPE in sentencepiece and use it in transformers ☆120 · Updated 2 years ago
- A curated collection of Chinese books ☆202 · Updated 2 years ago
- An open-source LLM based on an MoE structure ☆58 · Updated last year
- OpenLLaMA-Chinese: permissively licensed open-source instruction-following models based on OpenLLaMA ☆66 · Updated 2 years ago
- Llama3-Chinese is a large model built on the Meta-Llama-3-8B base using DoRA + LoRA+ training, trained on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples ☆295 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities ☆38 · Updated last year
- This project aims to provide beginners in the large-model field with a comprehensive knowledge system, covering both fundamentals and advanced topics, so that developers can quickly master the LLM tech stack ☆62 · Updated 11 months ago
- A purer tokenizer with a higher compression ratio ☆488 · Updated last year
- Chinese pretrained ModernBert ☆95 · Updated 8 months ago