thinksoso / ChatGLM-Instruct-Tuning
Fine-tuning ChatGLM
☆128Updated 2 years ago
Alternatives and similar repositories for ChatGLM-Instruct-Tuning
Users interested in ChatGLM-Instruct-Tuning are comparing it to the libraries listed below
- ChatGLM-6B fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer/decimal addition, subtraction, multiplication, and division problems; runs on GPU or CPU☆165Updated 2 years ago
- Document search tool built on sentence-transformers and ChatGLM☆157Updated 2 years ago
- Multi-GPU ChatGLM training with DeepSpeed and …☆412Updated last year
- ChatGLM2-6B full-parameter fine-tuning, with efficient fine-tuning for multi-turn dialogue☆401Updated 2 years ago
- ChatGLM-6B fine-tuning.☆136Updated 2 years ago
- Exploring how Chinese instruct data performs when fine-tuning ChatGLM and LLaMA☆389Updated 2 years ago
- ChatGLM2-6B fine-tuning, SFT/LoRA, instruction finetune☆110Updated 2 years ago
- QLoRA fine-tuning of Chinese large language models, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE☆90Updated 2 years ago
- TechGPT: Technology-Oriented Generative Pretrained Transformer☆226Updated 2 years ago
- 🛰️ LoRA, P-Tuning v2, Freeze, and RLHF fine-tuning of ChatGLM on real medical dialogue data; our sights are set beyond medical QA☆333Updated 2 years ago
- Making NLP something everyone can do; open source is hard work, remember to star☆101Updated 2 years ago
- SMP 2023 ChatGLM Financial LLM Challenge: walkthrough of a 60-point baseline☆186Updated 2 years ago
- ChatGLM-6B instruction learning | instruction data | Instruct☆654Updated 2 years ago
- Humanable Chat Generative-model Fine-tuning | LLM fine-tuning☆206Updated 2 years ago
- Chinese instruction fine-tuning dataset for Alpaca☆396Updated 2 years ago
- A tool for manual response data annotation sorting in the RLHF stage☆254Updated 2 years ago
- Modify ChatGLM output with only RLHF: directly raising or lowering the probability of target outputs☆195Updated 2 years ago
- Hands-on information extraction with LLaMA☆100Updated 2 years ago
- Luotuo QA, a Chinese large-language-model reading-comprehension model☆75Updated 2 years ago
- 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B with the peft library, including merging the LoRA model into the base model and 4-bit quantization☆357Updated 2 years ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open sourced Chinese-…☆171Updated last year
- ChatGLM2 LoRA fine-tuning support☆41Updated 2 years ago
- Firefly Chinese LLaMA-2 model, supporting continued pretraining of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models☆413Updated 2 years ago
- Baichuan-13B instruction fine-tuning☆90Updated 2 years ago
- Ziya-LLaMA-13B is IDEA's 13-billion-parameter large-scale pretrained model based on LLaMA, with capabilities in translation, programming, text classification, information extraction, summarization, copywriting, commonsense QA, and math. The Ziya general-purpose model has completed a three-stage training process of large-scale pretraining, multi-task supervised fine-tuning, and human-feedback learning. This repo is mainly for Ziya-…☆45Updated 2 years ago
- Integrating ONgDB database into langchain ecosystem☆77Updated 2 years ago
- SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding☆226Updated last year
- "桃李" (Taoli): a large language model for international Chinese-language education☆185Updated last year
- kbqa, langchain, large language model, chatgpt☆81Updated last year
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples☆500Updated 3 years ago
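Many of the repositories above fine-tune with LoRA adapters that are later merged back into the base weights (as the peft-based QLoRA entry describes). A minimal NumPy sketch of why that merge is lossless, using toy dimensions rather than any real model (the shapes, rank, and scaling here are illustrative assumptions, not values from these repos):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 8, 8, 2, 16  # toy sizes; real LoRA ranks are often 8-64

W = rng.standard_normal((d, k))  # frozen base weight
A = rng.standard_normal((r, k))  # low-rank factor A (trainable)
B = rng.standard_normal((d, r))  # low-rank factor B (zero-initialized in
                                 # real training; randomized here for the demo)

x = rng.standard_normal(k)

# Forward pass with the adapter applied on the fly:
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))

# Merging folds the adapter into the base weight, removing inference overhead:
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

# By associativity of matrix products, the two outputs are identical.
assert np.allclose(y_adapter, y_merged)
```

This identity is what peft's merge step relies on: only `d*r + r*k` adapter parameters are trained, yet after merging, inference costs exactly the same as the original model.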