dourgey / qwen2_moe_mergekit
A tool for building a Qwen2 MoE model from Qwen2 (Qwen1.5) models
☆16 · Updated last year
Alternatives and similar repositories for qwen2_moe_mergekit
Users interested in qwen2_moe_mergekit are comparing it to the repositories listed below
- ☆115 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆367 · Updated last year
- Qwen models fine-tuning ☆106 · Updated 10 months ago
- ☆120 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆56 · Updated 6 months ago
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- Qwen1.5 SFT (Alibaba): fine-tuning Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers / LoRA (peft) / inference ☆69 · Updated last year
- PICA, a multi-turn empathetic dialogue model ☆97 · Updated 2 years ago
- Fine-tune large language models with the DPO algorithm; simple and easy to get started with. ☆50 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆47 · Updated last year
- A Chinese version of Llama3, obtained from Llama3 through further CPT, SFT, and ORPO ☆17 · Updated last year
- Train a Chinese vocabulary with BPE in sentencepiece and use it in transformers. ☆120 · Updated 2 years ago
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Model Compression for Big Models ☆167 · Updated 2 years ago
- Alibaba Tongyi Qianwen (Qwen-7B-Chat/Qwen-7B): fine-tuning / LoRA / inference ☆134 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆272 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆417 · Updated last year
- Generate multi-turn conversational roleplay data based on self-instruct and evol-instruct. ☆137 · Updated last year
- A repo for updating and debugging Mixtral 8x7B, MoE, ChatGLM3, LLaMA2, Baichuan, Qwen, and other LLM models, including new models mixtral, mixtral 8x7b, … ☆47 · Updated 3 months ago
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆67 · Updated 11 months ago
- Inference code for LLaMA models ☆128 · Updated 2 years ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆260 · Updated last year
- ☆235 · Updated last year
- A toolkit on knowledge distillation for large language models ☆261 · Updated last month
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE mixture-of-experts models, fine-tuning, reinforcement learning, Tianchi data competitions ☆74 · Updated 11 months ago
- ☆184 · Updated 2 years ago
- ☆125 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated 2 years ago
- A training and evaluation tool for large language models built on HuggingFace. Supports a web UI and terminal inference for each model, parameter-efficient and full-parameter training (pre-training, SFT, RM, PPO, DPO), model merging, and quantization. ☆223 · Updated 2 years ago