ssbuild / chatglm_rlhf
chatglm_rlhf_finetuning
☆28 · Updated last year
Alternatives and similar repositories for chatglm_rlhf
Users who are interested in chatglm_rlhf are comparing it to the libraries listed below.
- MOSS chat fine-tuning ☆50 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- ChatGLM2-6B fine-tuning, SFT/LoRA, instruction fine-tuning ☆107 · Updated last year
- deep learning ☆149 · Updated last week
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆47 · Updated last year
- ChatGLM-6B fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer/decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 · Updated last year
- Use ChatGLM to perform text embedding ☆45 · Updated 2 years ago
- Apply RLHF directly to ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with only RLHF ☆192 · Updated last year
- ☆160 · Updated 2 years ago
- Chinese large language model evaluation, round two ☆70 · Updated last year
- Baichuan-13B instruction fine-tuning ☆90 · Updated last year
- Baichuan LLM supervised fine-tuning with LoRA ☆63 · Updated last year
- Fine-tuning Chinese large language models with QLoRA, including ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆125 · Updated last year
- A tool for manual annotation and ranking of response data in the RLHF stage for large models ☆250 · Updated last year
- Chinese instruction tuning datasets ☆131 · Updated last year
- ☆43 · Updated last year
- LLM with Lu Xun (鲁迅) style ☆84 · Updated last year
- ☆97 · Updated last year
- llama inference for tencentpretrain ☆98 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆65 · Updated 2 years ago
- Zero-shot learning evaluation benchmark, Chinese version ☆56 · Updated 3 years ago