qibin0506 / llm_trainer
☆42 · Updated this week
Alternatives and similar repositories for llm_trainer
Users interested in llm_trainer are comparing it to the libraries listed below
- PyTorch distributed tutorials ☆160 · Updated 5 months ago
- Qwen2.5 0.5B GRPO ☆71 · Updated 9 months ago
- A project for training a large language model from scratch, covering pretraining, fine-tuning, and direct preference optimization; the 1B-parameter model supports both Chinese and English. ☆683 · Updated 9 months ago
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆159 · Updated last year
- Train a 1B-parameter LLM on 1T tokens from scratch as a personal project ☆759 · Updated 7 months ago
- TinyRAG ☆378 · Updated 5 months ago
- Starting from zero, walk through ChatGPT's technical pipeline end to end. ☆268 · Updated last year
- A PyTorch reimplementation of the Transformer ☆88 · Updated last year
- Inference code for LLaMA models ☆128 · Updated 2 years ago
- A beginner-friendly tutorial on model compression; PDF download at https://github.com/datawhalechina/awesome-compression/releases ☆341 · Updated last week
- DeepSpeed tutorial, annotated examples, and study notes (efficient large-model training) ☆183 · Updated 2 years ago
- LLM & RL ☆256 · Updated last month
- A very, very small RAG system ☆322 · Updated 7 months ago
- LLM Tokenizer with BPE algorithm ☆44 · Updated last year
- My implementation of Stanford CS336 assignments. ☆209 · Updated 4 months ago
- Theory and practice of large language model (LLM) inference and deployment ☆362 · Updated 4 months ago
- MindSpore online courses: Step into LLM ☆481 · Updated 2 weeks ago
- Implementing a small-parameter Chinese large language model from scratch. ☆891 · Updated last year
- ☆400 · Updated 9 months ago
- Building a MiniLLM from 0 to 1 (pretraining + SFT + DPO, work in progress) ☆503 · Updated 8 months ago
- DeepSpeed Tutorial ☆104 · Updated last year
- ☆150 · Updated 5 months ago
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀, aiming to deeply understand, discuss, and implement the techniques, principles, and applications around large models. ☆354 · Updated last year
- Welcome to LLM-Dojo, an open-source place for learning about large language models, built with concise and readable code: a model-training framework (supporting mainstream models such as Qwen, Llama, GLM, and more), an RLHF framework (DPO/CPO/KTO/PPO), and other features. 👩‍🎓👨‍🎓 ☆901 · Updated this week
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆48 · Updated 3 months ago
- Explanations, extensions, and reproductions of the DeepSeek series of work. ☆688 · Updated 8 months ago
- A repository for personal experiments reproducing the LLM pre-training process. ☆480 · Updated 7 months ago
- Hand-coded interview questions for LLM roles (the main focus) and other AI algorithm roles such as search, ads, and recommendation (not LeetCode), e.g. Self-Attention and AUC; these generally test overall ability more than LeetCode and sit closer to real business work and fundamentals ☆446 · Updated 11 months ago
- Building an MoE large language model as a personal project: a complete walkthrough from pretraining to DPO ☆1,885 · Updated last month
- 🏆🏆 Large language models: all in one & all from scratch. 🌍🌍 Collect and clean data, train a tokenizer, then pretrain, SFT, and GRPO! ☆49 · Updated 3 months ago