OctopusMind / DPO
An implementation of the DPO (Direct Preference Optimization) algorithm.
☆33 · Updated 9 months ago
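For context, the core of DPO is a single pairwise loss over per-sequence log-probabilities under the trained policy and a frozen reference model. Below is a minimal PyTorch sketch of that loss; the function and argument names are illustrative, not taken from the OctopusMind/DPO code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss (Rafailov et al., 2023): -log sigmoid(beta * margin).

    Each input is a 1-D tensor of per-sequence log-probabilities
    (token log-probs summed over the response) for the preferred
    ("chosen") and dispreferred ("rejected") answers.
    """
    # Log-ratio of the policy to the reference model per response.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Implicit reward margin scaled by beta; a larger margin in
    # favor of the chosen response gives a smaller loss.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probabilities for a batch of 4 pairs.
lp = {k: -torch.rand(4) * 10 for k in ("pc", "pr", "rc", "rr")}
loss = dpo_loss(lp["pc"], lp["pr"], lp["rc"], lp["rr"])
print(loss.item())
```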
Alternatives and similar repositories for DPO:
Users interested in DPO are comparing it to the libraries listed below.
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆56 · Updated 11 months ago
- A Transformer model based on the Gated Attention Unit (early-preview version) ☆97 · Updated 2 years ago
- Training an LLM from scratch on a single 24 GB GPU ☆50 · Updated 5 months ago
- Code implementation of Baichuan's Dynamic NTK-ALiBi: inference over longer texts without fine-tuning ☆47 · Updated last year
- Fine-tuning of models such as LLaMA and ChatGLM ☆86 · Updated 8 months ago
- Chinese instruction-tuning datasets ☆129 · Updated 11 months ago
- Fine-tuning large language models with the DPO algorithm; simple and easy to get started with. ☆35 · Updated 8 months ago
- An instruction-tuning tool for large language models (supports FlashAttention) ☆171 · Updated last year
- Using BPE in sentencepiece to train a Chinese vocabulary and use it in transformers (a minimal sketch follows at the end of this list). ☆117 · Updated last year
- FLASHQuad_pytorch ☆67 · Updated 3 years ago
- A paper list of pre-trained language models (PLMs). ☆80 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- NTK-scaled version of the ALiBi position encoding in Transformer. ☆67 · Updated last year
- ☆84 · Updated last year
- SuperCLUE-Math6: exploring a new generation of natively Chinese multi-turn, multi-step mathematical reasoning datasets ☆53 · Updated last year
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆152 · Updated 5 months ago
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆65 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆72 · Updated last year
- ☆105 · Updated 4 months ago
- First-place solution to the Tianchi algorithm competition "BetterMixture - Large Model Data Mixing Challenge" ☆28 · Updated 8 months ago
- An RLHF implementation added to ChatGLM-6B, with line-by-line commentary on parts of the core code; the worked examples cover short news-headline generation and RLHF for recommendations over a given context ☆82 · Updated last year
- Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed) ☆77 · Updated last year
- A curated collection of ChatGPT-related resources ☆55 · Updated last year
- How to train an LLM tokenizer ☆142 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆77 · Updated 4 months ago
- A repo for updating and debugging Mixtral-8x7B, MoE, ChatGLM3, LLaMA 2, Baichuan, Qwen, and other LLM models, including new models Mixtral, Mixtral 8x7B, … ☆43 · Updated 2 weeks ago
- GoGPT: Chinese-English enhanced large language models trained on Llama/Llama 2 | Chinese-Llama2 ☆78 · Updated last year
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆62 · Updated last month
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆120 · Updated last year
- ☆34 · Updated last month
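For the sentencepiece BPE entry above, here is a minimal sketch of the workflow that description implies: train a BPE model on a plain-text corpus, then load it as a Hugging Face tokenizer. The corpus path, vocabulary size, and other parameters are illustrative assumptions, not values from that repository.

```python
import sentencepiece as spm
from transformers import LlamaTokenizer

# Train a BPE vocabulary on a plain-text corpus (one sentence per
# line). High character_coverage is the usual recommendation for
# character-rich languages such as Chinese.
spm.SentencePieceTrainer.train(
    input="corpus_zh.txt",   # assumed corpus file
    model_prefix="zh_bpe",   # writes zh_bpe.model / zh_bpe.vocab
    vocab_size=32000,
    model_type="bpe",
    character_coverage=0.9995,
)

# LlamaTokenizer can wrap a sentencepiece model file directly.
tokenizer = LlamaTokenizer(vocab_file="zh_bpe.model")
print(tokenizer.tokenize("使用BPE训练中文词表"))
```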