dourgey / qwen2_moe_mergekit
A tool for generating a Qwen2 MoE model from Qwen2 (Qwen1.5) models
☆16 · Updated last year
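The listing itself does not document the merge procedure, but a common way to build such a model is to instantiate a `Qwen2MoeConfig` that mirrors the dense Qwen1.5 config and seed every expert with the dense model's MLP weights. The sketch below illustrates that idea with the Hugging Face `transformers` classes; the checkpoint path, expert count, and weight-copy scheme are assumptions for illustration, not the actual API of qwen2_moe_mergekit.

```python
import torch
from transformers import AutoModelForCausalLM, Qwen2MoeConfig, Qwen2MoeForCausalLM

# Assumed source checkpoint and expert count; the real tool may expose different options.
dense = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B", torch_dtype=torch.bfloat16)
cfg = dense.config
num_experts = 4

# Mirror the dense config; keep the expert FFN the same width as the dense MLP
# so its weights can be copied verbatim.
moe_cfg = Qwen2MoeConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rope_theta=cfg.rope_theta,
    tie_word_embeddings=cfg.tie_word_embeddings,
    num_experts=num_experts,
    num_experts_per_tok=2,
    moe_intermediate_size=cfg.intermediate_size,
    shared_expert_intermediate_size=cfg.intermediate_size,
)
moe = Qwen2MoeForCausalLM(moe_cfg)

# Copy embeddings, attention, and norms one-to-one; duplicate each dense MLP into
# every expert and into the shared expert. Router gates stay randomly initialized.
moe_sd = moe.state_dict()
for name, tensor in dense.state_dict().items():
    if name in moe_sd and moe_sd[name].shape == tensor.shape:
        moe_sd[name] = tensor
    if ".mlp." in name:
        for j in range(num_experts):
            expert_name = name.replace(".mlp.", f".mlp.experts.{j}.")
            if expert_name in moe_sd:
                moe_sd[expert_name] = tensor.clone()
        shared_name = name.replace(".mlp.", ".mlp.shared_expert.")
        if shared_name in moe_sd:
            moe_sd[shared_name] = tensor.clone()

moe.load_state_dict(moe_sd)
moe.save_pretrained("qwen2_moe_merged")
```

A merged model built this way still needs further training so the routers learn a useful expert assignment; the duplicated experts are only a starting point.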
Alternatives and similar repositories for qwen2_moe_mergekit
Users that are interested in qwen2_moe_mergekit are comparing it to the libraries listed below
- ☆118 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀, dedicated to understanding, discussing, and implementing the techniques, principles, and applications related to large models. ☆339 · Updated last year
- ☆114 · Updated 10 months ago
- Qwen1.5-SFT (Alibaba): fine-tuning (transformers) / LoRA (peft) / inference for Qwen_Qwen1.5-2B-Chat and Qwen_Qwen1.5-7B-Chat ☆67 · Updated last year
- A Chinese Llama3 obtained from Llama3 through further CPT, SFT, and ORPO ☆17 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆55 · Updated 2 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of llama3. ☆208 · Updated 11 months ago
- A simple text-classification implementation using Qwen2ForSequenceClassification. ☆82 · Updated last year
- Inference code for LLaMA models ☆123 · Updated 2 years ago
- ☆53 · Updated 2 years ago
- A purer tokenizer with a higher compression ratio ☆482 · Updated 9 months ago
- Alibaba Tongyi Qianwen (Qwen-7B-Chat/Qwen-7B): fine-tuning / LoRA / inference ☆115 · Updated last year
- Fine-tuning of Qwen models ☆103 · Updated 6 months ago
- Train a Chinese vocabulary with BPE via sentencepiece and use it in transformers. ☆119 · Updated 2 years ago
- Train llama on a single A100 80G node using 🤗 transformers and 🚀 Deepspeed Pipeline Parallelism ☆224 · Updated last year
- How to train an LLM tokenizer ☆152 · Updated 2 years ago
- The complete training code for the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE mixture-of-experts models, fine-tuning, reinforcement learning, and Tianchi data competitions ☆68 · Updated 7 months ago
- A repository for individuals to experiment with and reproduce the LLM pre-training process. ☆470 · Updated 4 months ago
- ☆231 · Updated last year
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with deepspeed pipeline mode. Faster than zero/zero++/fsdp. ☆98 · Updated last year
- PICA, a multi-turn empathetic dialogue model ☆97 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆248 · Updated 10 months ago
- First-place (top-1) solution to the Tianchi algorithm competition "BetterMixture - Large Model Data Mixing Challenge" ☆32 · Updated last year
- ☆74 · Updated 3 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 9 months ago
- Train a 1B LLM from scratch on 1T tokens as a personal project ☆735 · Updated 4 months ago
- A roundup of currently available open-source Chinese dialogue datasets ☆179 · Updated 2 years ago
- llama2 fine-tuning with deepspeed and lora ☆176 · Updated 2 years ago
- Model Compression for Big Models ☆165 · Updated 2 years ago