ChatGLM multi-GPU with DeepSpeed and …
☆408 · Updated Jul 8, 2024
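The repo title above mentions DeepSpeed for multi-GPU ChatGLM training. As a rough illustration of what that typically involves, here is a minimal DeepSpeed ZeRO stage-2 JSON config sketch; the values are illustrative assumptions, not this repo's actual configuration:

```json
{
  "train_micro_batch_size_per_gpu": 2,
  "gradient_accumulation_steps": 8,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
```

A config like this is typically passed to a training script launched with `deepspeed --num_gpus=2 train.py` (the script name here is hypothetical). ZeRO stage 2 partitions optimizer states and gradients across GPUs, which is what makes fine-tuning a 6B model feasible on multiple consumer cards.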
Alternatives and similar repositories for Chatglm_lora_multi-gpu
Users interested in Chatglm_lora_multi-gpu are comparing it to the libraries listed below.
- ChatGLM-6B instruction learning | instruction data | Instruct (☆653, updated Apr 10, 2023)
- A fine-tuning scheme based on ChatGLM-6B + LoRA (☆3,759, updated Nov 25, 2023)
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA (☆389, updated Apr 4, 2023)
- ChatGLM-6B fine-tuning and Alpaca fine-tuning (☆1,536, updated Mar 9, 2025)
- ChatGLM-6B fine-tuning (☆136, updated Apr 25, 2023)
- Fine-tuning ChatGLM-6B with PEFT | efficient ChatGLM fine-tuning based on PEFT (☆3,731, updated Oct 12, 2023)
- ChatGLM-6B fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer/decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU (☆165, updated Aug 24, 2023)
- ⭐️ NLP algorithms with the transformers lib, supporting text classification, text generation, information extraction, text matching, RLHF, SF… (☆2,409, updated Sep 29, 2023)
- Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA) (☆718, updated Jul 18, 2023)
- BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM) (☆8,281, updated Oct 16, 2024)
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pre-training/instruction-tuning datasets (☆3,055, updated Apr 14, 2024)
- Downstream task fine-tuning for the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning (☆2,777, updated Dec 12, 2023)
- A manually curated Chinese dialogue dataset plus ChatGLM fine-tuning code (☆1,194, updated May 3, 2025)
- Multi-GPU deployment version | ChatGLM-6B: An Open Bilingual Dialogue Language Model (☆62, updated Mar 26, 2023)
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer (☆132, updated May 27, 2023)
- Repo for Chinese Medical ChatGLM: ChatGLM instruction fine-tuning based on Chinese medical knowledge (☆1,034, updated May 19, 2023)
- Chinese Alpaca instruction fine-tuning dataset (☆397, updated Mar 26, 2023)
- Humanable Chat Generative-model Fine-tuning | LLM fine-tuning (☆206, updated Sep 22, 2023)
- Chinese-Vicuna: a Chinese instruction-following LLaMA-based model; a low-resource Chinese llama+lora scheme with a structure modeled on Alpaca (☆4,136, updated Apr 18, 2025)
- Chinese NLP solutions (large models, data, models, training, inference) (☆3,779, updated Aug 5, 2025)
- Firefly: an LLM training toolkit supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, … (☆6,638, updated Oct 24, 2024)
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., lora, p-tunin…) (☆2,797, updated Dec 12, 2023)
- Panda: an open-source overseas Chinese large language model project launched in May 2023, exploring the full technology stack in the LLM era and aiming to promote innovation and collaboration in Chinese NLP (☆1,036, updated Oct 19, 2023)
- Use ChatGLM to perform text embedding (☆45, updated Apr 9, 2023)
- TextGen: implementations of text generation models, including LLaMA, ChatGLM, BLOOM, GPT2, BART, T5, SongNet, and so on (☆980, updated Sep 14, 2024)
- Chinese LangChain project | 小必应, Q.Talk, 强聊, QiangTalk (☆2,830, updated Jun 20, 2023)
- 骆驼 (Luotuo): open-sourced Chinese language models, developed by 陈启源 @ Central China Normal University, 李鲁鲁 @ SenseTime, and 冷子昂 @ SenseTime (☆3,618, updated Sep 3, 2023)
- Tencent pre-training framework in PyTorch & pre-trained model zoo (☆1,089, updated Aug 4, 2024)
- An open-source conversational language model developed by the Knowledge Works Research Laboratory at Fudan University (☆64, updated Oct 12, 2023)
- Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (☆18,971, updated Jul 15, 2025)
- ChatGLM2-6B full-parameter fine-tuning, with efficient fine-tuning supporting multi-turn dialogue (☆402, updated Aug 17, 2023)
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning (☆110, updated Jul 19, 2023)
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences (☆112, updated Apr 1, 2023)
- ✅ Usable on 4 GB GPUs | a simple implementation of single-machine ChatGLM inference across multiple compute devices (GPU, CPU) (☆34, updated Apr 20, 2023)
- DeepSpeed, LLMs, medical dialogue, medical large models, pre-training, fine-tuning (☆291, updated Jun 7, 2024)
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models (☆619, updated Jan 24, 2025)
- Raise or lower the probability of target outputs from ChatGLM using RLHF alone | Modify ChatGLM output with only RLHF (☆198, updated May 23, 2023)
- The online version is temporarily unavailable because we cannot afford the key. You can clone and run it locally. Note: we set default ope… (☆828, updated May 28, 2024)
- fastllm is a high-performance LLM inference library with no backend dependencies. It supports tensor-parallel inference of dense models and mixed-mode inference of MoE models; any GPU with more than 10 GB of VRAM can run the full DeepSeek model. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at about 20 tps for a single stream; an INT4-quantized model reaches 30 tp… (☆4,161, updated this week)
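Most of the repos listed above fine-tune ChatGLM or LLaMA with LoRA. As a self-contained sketch of the underlying idea (plain Python with illustrative names, not any library's real API): a frozen weight matrix W is augmented with a trainable low-rank update B @ A scaled by alpha/r, and because B is initialized to zero, the adapted layer initially behaves exactly like the base layer.

```python
# Minimal LoRA forward pass: y = W @ x + (alpha / r) * B @ (A @ x).
# W is frozen; only the small adapters A (r x d) and B (d x r) are trained.
# All names here are illustrative, not the API of peft or any other library.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha, r):
    base = matvec(W, x)            # frozen base projection
    delta = matvec(B, matvec(A, x))  # low-rank update, rank r
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy shapes: W is 2x2, rank r=1 adapters A (1x2) and B (2x1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]
B = [[0.0], [0.0]]  # B starts at zero, so the adapter is a no-op at init
x = [2.0, 4.0]

print(lora_forward(W, A, B, x, alpha=16, r=1))  # identical to W @ x at init
```

Because only A and B receive gradients, the trainable parameter count drops from d² to 2·d·r per adapted matrix, which is why a 6B model fits on a single consumer GPU (or a few, with DeepSpeed) during fine-tuning.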