sxontheway / Keep-Learning
The record of what I've been through.
☆101 · Updated 7 months ago
Alternatives and similar repositories for Keep-Learning
Users interested in Keep-Learning are comparing it to the repositories listed below
- DeepSpeed tutorials, annotated examples, and study notes (efficient large-model training) ☆176 · Updated last year
- An awesome GPU task scheduler: a lightweight, easy-to-use tool for scheduling jobs on GPU clusters. Give it a star if you find it useful. ☆188 · Updated 3 years ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated last month
- A light-weight script for maintaining a LOT of machine learning experiments. ☆92 · Updated 2 years ago
- ☆52 · Updated 2 years ago
- The pure and clear PyTorch Distributed Training Framework. ☆274 · Updated last year
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆156 · Updated 10 months ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀, dedicated to understanding, discussing, and implementing the techniques, principles, and applications of large models. ☆338 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆416 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆283 · Updated 2 years ago
- PyTorch model-training code for single precision, half precision, mixed precision, single-GPU, multi-GPU (DP / DDP), FSDP, and DeepSpeed, comparing the training speed and GPU memory usage of each method ☆117 · Updated last year
- A convenient script for grabbing idle GPUs ☆352 · Updated 7 months ago
- ☆36 · Updated 8 months ago
- ☆113 · Updated 9 months ago
- ☆208 · Updated 10 months ago
- A purer tokenizer with a higher compression ratio ☆481 · Updated 9 months ago
- Inference code for LLaMA models ☆123 · Updated 2 years ago
- DeepSpeed Tutorial ☆101 · Updated last year
- 青稞Talk ☆135 · Updated last week
- A brief of TorchScript by MNIST ☆112 · Updated 3 years ago
- real Transformer TeraFLOPS on various GPUs ☆915 · Updated last year
- A MoE impl for PyTorch, [ATC'23] SmartMoE ☆67 · Updated 2 years ago
- Survey Paper List - Efficient LLM and Foundation Models ☆253 · Updated 11 months ago
- ☆79 · Updated last year
- The Roadmap for LLMs ☆86 · Updated 2 years ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆605 · Updated last week
- ☆260 · Updated 5 months ago
- FlagEval is an evaluation toolkit for AI large foundation models. ☆339 · Updated 4 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- A PyTorch-style automatic differentiation tool implemented in pure Python, for learning purposes. ☆51 · Updated last year