dourgey / qwen2_moe_mergekit
A tool for building a Qwen2 MoE model from Qwen2 (Qwen1.5) models
☆15 · Updated 11 months ago
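The repository's one-line description only says that it produces a Qwen2 MoE model from dense Qwen2 (Qwen1.5) checkpoints. As a rough orientation, the sketch below illustrates the general "dense-to-MoE upcycling" idea such tools are commonly based on: each expert is initialized as a copy of the dense FFN weights and a freshly initialized router is added. This is a minimal, self-contained illustration; the class and parameter names are hypothetical and do not reflect this repository's actual code or API.

```python
# Illustrative sketch only: upcycling a dense FFN into a top-k MoE layer by
# cloning its weights into several experts and adding a new router.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseFFN(nn.Module):
    """Stand-in for one Qwen2-style MLP block (gate/up/down projections)."""
    def __init__(self, hidden: int, intermediate: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden, intermediate, bias=False)
        self.up_proj = nn.Linear(hidden, intermediate, bias=False)
        self.down_proj = nn.Linear(intermediate, hidden, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

class MoEFFN(nn.Module):
    """Top-k mixture of experts built by cloning a dense FFN."""
    def __init__(self, dense: DenseFFN, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        hidden = dense.gate_proj.in_features
        # Each expert starts as an exact copy of the dense FFN weights.
        self.experts = nn.ModuleList(copy.deepcopy(dense) for _ in range(num_experts))
        # The router is new and randomly initialized.
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, hidden)
        scores = self.router(x).softmax(dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e          # tokens routed to expert e at slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

dense = DenseFFN(hidden=64, intermediate=128)
moe = MoEFFN(dense, num_experts=4, top_k=2)
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

Because every expert begins as an identical copy of the dense MLP, the upcycled model initially behaves like the dense model; the experts only diverge once the MoE model is trained further with the router in place.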
Alternatives and similar repositories for qwen2_moe_mergekit:
Users interested in qwen2_moe_mergekit are comparing it to the libraries listed below
- The complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆65 · Updated last year
- Full-parameter, LoRA, and QLoRA fine-tuning of Llama 3. ☆187 · Updated 5 months ago
- ☆105 · Updated 8 months ago
- First-place (top-1) solution for the Tianchi algorithm competition "BetterMixture - Large Model Data Mixing Challenge" ☆27 · Updated 8 months ago
- How to train an LLM tokenizer ☆142 · Updated last year
- Qwen1.5-SFT (Alibaba): fine-tuning (transformers) / LoRA (peft) / inference for Qwen_Qwen1.5-2B-Chat and Qwen_Qwen1.5-7B-Chat ☆55 · Updated 10 months ago
- Train a Chinese vocabulary with BPE in sentencepiece and use it in transformers. ☆116 · Updated last year
- LLM+RAG for QA ☆22 · Updated last year
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE mixture-of-experts models, fine-tuning, reinforcement learning, and Tianchi data competitions ☆58 · Updated last month
- ☆135 · Updated 10 months ago
- ☆102 · Updated this week
- ☆105 · Updated 4 months ago
- A wide variety of research projects developed by the SpokenNLP team of Speech Lab, Alibaba Group. ☆115 · Updated 2 months ago
- ☆15 · Updated 11 months ago
- Code for A New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆62 · Updated last month
- Train an LLM from scratch using a single 24 GB GPU ☆50 · Updated 5 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated 11 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆76 · Updated 4 months ago
- A Chinese version of Llama 3 obtained from Llama 3 through further CPT, SFT, and ORPO ☆17 · Updated 11 months ago
- Alibaba Tongyi Qianwen (Qwen-7B-Chat/Qwen-7B): fine-tuning / LoRA / inference ☆85 · Updated 10 months ago
- Qwen models fine-tuning ☆93 · Updated 2 weeks ago
- Fine-tune large language models with the DPO algorithm; simple and easy to get started with. ☆31 · Updated 8 months ago
- DPO training for Tongyi Qianwen (Qwen) ☆40 · Updated 6 months ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing the techniques, principles, and applications of large models. ☆303 · Updated 8 months ago
- PyTorch distributed training ☆64 · Updated last year
- A repo for updating and debugging Mixtral-7x8B, MoE, ChatGLM3, LLaMa2, BaChuan, Qwen and other LLM models, including new models mixtral, mixtral 8x7b, … ☆43 · Updated last week
- ☆100 · Updated 11 months ago
- DeepSpeed Tutorial ☆95 · Updated 7 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆129 · Updated 9 months ago
- Inference code for LLaMA models ☆118 · Updated last year