ssbuild / chatglm_rlhf
chatglm_rlhf_finetuning
☆27 · Updated 11 months ago
Related projects:
- MOSS chat fine-tuning ☆50 · Updated 4 months ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan (百川): inference on longer contexts without fine-tuning ☆45 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆107 · Updated last year
- Chinese large language model evaluation, round 2 ☆68 · Updated 10 months ago
- ☆42 · Updated 9 months ago
- ☆90 · Updated 5 months ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- Reinforcement learning training for LLMs such as GPT-2, LLaMA, and BLOOM ☆26 · Updated last year
- Text deduplication ☆65 · Updated 3 months ago
- Measuring Massive Multitask Chinese Understanding ☆86 · Updated 5 months ago
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆54 · Updated last year
- Supports ChatGLM2 LoRA fine-tuning ☆39 · Updated last year
- Zero-shot learning evaluation benchmark, Chinese edition ☆54 · Updated 3 years ago
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆115 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated 5 months ago
- Summarizes open-source large language models and low-cost methods for replicating ChatGPT ☆134 · Updated last year
- Complete training code for an open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆58 · Updated last year
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆47 · Updated last year
- LLM with LuXun (鲁迅) style ☆75 · Updated last year
- Chinese large language model evaluation, round 1 ☆105 · Updated 10 months ago
- ☆23 · Updated last year
- NBCE: Naive Bayes-based Context Extension on ChatGLM-6B ☆14 · Updated last year
- ☆156 · Updated last year
- ☆172 · Updated last year
- PICA: a multi-turn empathetic dialogue model ☆83 · Updated last year
- 骆驼QA (Luotuo QA): a Chinese large language model for reading comprehension ☆71 · Updated last year
- NTK-scaled version of ALiBi position encoding in Transformer ☆64 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆68 · Updated 11 months ago
- Zero-shot NLU & NLG based on the mengzi-t5-base-mt pretrained model ☆75 · Updated last year
- Llama 3 source code explained line by line in Chinese ☆21 · Updated 5 months ago