lansinuote / Simple_LLM_PPO
☆36 · Updated last month
Related projects:
- Train an LLM from scratch on a single 24 GB GPU ☆47 · Updated 2 months ago
- ☆59 · Updated 10 months ago
- Alibaba Tianchi (阿里天池): 2023 Global Intelligent Automotive AI Challenge, Track 1: LLM retrieval QA, baseline 80+ ☆63 · Updated 8 months ago
- PyTorch distributed training ☆57 · Updated last year
- How to train an LLM tokenizer ☆123 · Updated last year
- Train a Chinese LLM from scratch on your own ☆145 · Updated last week
- ☆79 · Updated 2 months ago
- ChatGLM2-6B-Explained ☆33 · Updated last year
- Qwen1.5-SFT (Alibaba): Qwen_Qwen1.5-2B-Chat / Qwen_Qwen1.5-7B-Chat fine-tuning (transformers) / LoRA (peft) / inference ☆40 · Updated 4 months ago
- Notes on reproducing LLM components from scratch ☆105 · Updated 3 months ago
- Simple and efficient multi-GPU LLM fine-tuning with DeepSpeed + Trainer ☆115 · Updated last year
- ☆90 · Updated 6 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆61 · Updated 4 months ago
- LLM instruction-tuning toolkit (supports FlashAttention) ☆162 · Updated 8 months ago
- Fine-tune LLMs with the DPO algorithm; simple and easy to get started ☆24 · Updated 2 months ago
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24, Xiaohongshu Inc. ☆151 · Updated 6 months ago
- One codebase for instruction fine-tuning of LLMs ☆36 · Updated last year
- Train a Chinese vocabulary with SentencePiece BPE and use it in transformers ☆107 · Updated last year
- ☆124 · Updated 2 months ago
- NTK-scaled version of ALiBi position encoding in Transformer ☆64 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF ☆58 · Updated last year
- Fine-tuning for LLaMA, ChatGLM, and other models ☆79 · Updated 2 months ago
- ☆48 · Updated 3 weeks ago
- ☆77 · Updated 2 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated 5 months ago
- Text deduplication ☆65 · Updated 3 months ago
- A tool for manually ranking and annotating response data in the RLHF stage ☆240 · Updated last year
- Demo for the AIOps24 Challenge ☆53 · Updated 3 months ago
- Baichuan LLM supervised fine-tuning with LoRA ☆57 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆107 · Updated last year