bobo0810 / LearnDeepSpeed
DeepSpeed tutorials & annotated examples & study notes (efficient training of large models)
☆180 Updated 2 years ago
Alternatives and similar repositories for LearnDeepSpeed
Users that are interested in LearnDeepSpeed are comparing it to the libraries listed below
- Notes mainly covering multimodal topics for large language model (LLM) algorithm/application engineers ☆250 Updated last year
- PyTorch training code for single-precision, half-precision, and mixed-precision training, on a single GPU and on multiple GPUs (DP / DDP), with FSDP and DeepSpeed, comparing the training speed and GPU memory usage of the different methods ☆123 Updated last year
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced ☆76 Updated last year
- ☆205 Updated 2 weeks ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deep understanding, discussion, and implementation of techniques, principles, and applications related to large models ☆351 Updated last year
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆281 Updated 2 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆144 Updated 7 months ago
- DeepSpeed Tutorial ☆102 Updated last year
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆730 Updated last week
- ☆396 Updated 9 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆495 Updated last week
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆382 Updated last year
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆127 Updated last year
- llm & rl ☆243 Updated 3 weeks ago
- ☆103 Updated last year
- 青稞Talk ☆160 Updated last week
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆159 Updated last year
- ☆124 Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆376 Updated 6 months ago
- Interview questions from major tech companies and interview experience for programmers ☆193 Updated 5 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆146 Updated last month
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆65 Updated 8 months ago
- ☆115 Updated last year
- A collection of multimodal (MM) + Chat resources ☆278 Updated 2 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM 2025] ☆194 Updated 4 months ago
- ☆213 Updated last year
- PyTorch distributed training tutorials ☆156 Updated 5 months ago
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆751 Updated 6 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆264 Updated 8 months ago
- ☆971 Updated 3 weeks ago