yongzhuo / gemma-sft
Gemma-SFT: fine-tuning gemma-2b/gemma-7b (transformers), LoRA (peft), and inference
☆33 · Updated last year
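The description points to a transformers + peft workflow: wrap the base Gemma model in LoRA adapters, train only those adapters, then run inference. A minimal sketch of that pattern is below; the hub id `google/gemma-2b`, the LoRA hyperparameters, and the demo prompt are illustrative assumptions, not configuration taken from gemma-sft itself.

```python
# Minimal LoRA fine-tuning/inference sketch with transformers + peft.
# Assumptions (not from gemma-sft): hub id, LoRA hyperparameters, prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "google/gemma-2b"  # assumed hub id; gemma-7b works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Wrap the base model with LoRA adapters; only these low-rank matrices train.
lora_config = LoraConfig(
    r=8,                       # rank of the low-rank update
    lora_alpha=16,             # scaling factor applied to the update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of the base parameters

# ...train with transformers.Trainer or a custom loop, then run inference:
prompt = "Explain supervised fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The appeal of this setup, and presumably of the SFT repos listed below, is that only the adapter weights are optimized, so a 2B/7B model can be fine-tuned on a single consumer GPU and the adapters merged or swapped afterwards.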
Alternatives and similar repositories for gemma-sft
Users interested in gemma-sft are comparing it to the repositories listed below
- ☆174 · Updated last year
- A survey of large language model training and serving ☆37 · Updated 2 years ago
- An open-source LLM based on an MoE structure. ☆58 · Updated last year
- The first Chinese llama2 13b model (base + Chinese dialogue SFT, enabling fluent multi-turn natural-language interaction) ☆91 · Updated 2 years ago
- Fine-tuning based on ChatGLM2-6B, covering full-parameter, parameter-efficient, and quantization-aware training; supports instruction fine-tuning and multi-turn dialogue fine-tuning. ☆26 · Updated 2 years ago
- Qwen1.5-SFT (Alibaba): fine-tuning Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat (transformers), LoRA (peft), and inference ☆69 · Updated last year
- LLaMA Factory Document ☆161 · Updated 2 weeks ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆217 · Updated last year
- Imitate OpenAI with Local Models ☆89 · Updated last year
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- YiZhao: A 2TB Open Financial Corpus. Data and tools for generating and inspecting YiZhao, a safe, high-quality, open-source bilingual fin… ☆38 · Updated 6 months ago
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities. ☆39 · Updated last year
- Line-by-line explanation of Qwen 14B and 7B ☆63 · Updated 2 years ago
- A line-by-line annotated version of the Baichuan2 code, suitable for beginners ☆213 · Updated 2 years ago
- An LLM training and evaluation tool built on HuggingFace. Supports per-model web UI and terminal inference, parameter-efficient and full-parameter training (pre-training, SFT, RM, PPO, DPO), plus model merging and quantization. ☆221 · Updated 2 years ago
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆19 · Updated 2 years ago
- A repo for updating and debugging Mixtral-7x8B, MoE, ChatGLM3, LLaMa2, Baichuan, Qwen, and other LLM models, including new models mixtral, mixtral 8x7b, … ☆47 · Updated 3 months ago
- ☆83 · Updated last year
- This project aims to give beginners in the large-model field a comprehensive body of knowledge, from basics to advanced topics, so developers can quickly master the LLM tech stack. ☆62 · Updated last year
- deep learning ☆149 · Updated 8 months ago
- Fine-tuning of Qwen models ☆105 · Updated 10 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆58 · Updated last year
- Train a Chinese mini LLM from scratch that can hold basic conversations, with model size determined by the hardware at hand ☆65 · Updated last year
- ☆96 · Updated last year
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- ☆96 · Updated last year
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE mixture-of-experts models, fine-tuning, reinforcement learning, Tianchi data competitions ☆74 · Updated 11 months ago
- CodeGPT: A Code-Related Dialogue Dataset Generated by GPT and for GPT ☆114 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year