bobo0810 / LearnDeepSpeed
DeepSpeed tutorials & annotated examples & study notes (efficient training of large models)
☆173Updated last year
Alternatives and similar repositories for LearnDeepSpeed
Users interested in LearnDeepSpeed are comparing it to the libraries listed below.
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers☆219Updated last year
- Training a LLaVA model with better Chinese support, with open-sourced training code and data.☆64Updated 11 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge☆122Updated 8 months ago
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models☆540Updated 3 weeks ago
- ☆699Updated 3 weeks ago
- DeepSpeed Tutorial☆100Updated 11 months ago
- Efficient Multimodal Large Language Models: A Survey☆362Updated 3 months ago
- ☆361Updated 5 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources☆244Updated 2 months ago
- ☆198Updated 3 months ago
- PyTorch training code for single-precision, half-precision, mixed-precision, single-GPU, and multi-GPU (DP / DDP) setups, plus FSDP and DeepSpeed, comparing training speed and GPU memory usage across methods☆114Updated last year
- A collection of multimodal (MM) + Chat resources☆273Updated 2 months ago
- ☆104Updated last year
- ☆91Updated 10 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme☆138Updated 3 months ago
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models.☆329Updated last year
- Cool Papers - Immersive Paper Discovery☆584Updated 2 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models.☆138Updated 3 months ago
- llm & rl☆176Updated this week
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment☆360Updated last year
- pytorch distribute tutorials☆143Updated last month
- ☆112Updated 8 months ago
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next(Multimodal Pipeline Parallel based on Qwen-LM). Support [video/image/multi-image] {sft/conv…☆463Updated 4 months ago
- Interview questions and interview experiences from major tech companies, for programmers☆166Updated 2 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey☆446Updated 6 months ago
- ☆204Updated 9 months ago
- Train a 1B-parameter LLM on 1T tokens from scratch as a personal project☆707Updated 3 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory☆166Updated 3 weeks ago
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models☆65Updated 5 months ago
- an implementation of transformer, bert, gpt, and diffusion models for learning purposes☆155Updated 9 months ago
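Several of the repositories above (LearnDeepSpeed and the DeepSpeed Tutorial entry) center on DeepSpeed, whose training behavior is driven by a JSON configuration file passed at launch. As a minimal sketch of what such a config looks like (the specific values here are illustrative, not taken from any of the listed repositories), enabling fp16 mixed precision and ZeRO stage-2 optimizer sharding with CPU offload might be written as:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  }
}
```

A file like this is typically passed via `deepspeed train.py --deepspeed_config ds_config.json`; the tutorials above walk through these fields in more detail.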