sxontheway / Keep-Learning
The record of what I've been through. Now moved to Notion. See link below.
☆101 · Updated 10 months ago
Alternatives and similar repositories for Keep-Learning
Users interested in Keep-Learning are comparing it to the repositories listed below
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 4 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models) ☆183 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way ☆417 · Updated last year
- The pure and clear PyTorch Distributed Training Framework. ☆274 · Updated last year
- An awesome GPU task scheduler. A lightweight, easy-to-use tool for scheduling jobs on GPU clusters; star it if you find it useful. ☆193 · Updated 3 years ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆285 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- ☆79 · Updated last year
- Model Compression for Big Models ☆165 · Updated 2 years ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deep understanding, discussion, and implementation of techniques, principles, and applications related to large models. ☆355 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆70 · Updated 2 years ago
- Best practice for training LLaMA models in Megatron-LM ☆663 · Updated last year
- Inference code for LLaMA models ☆128 · Updated 2 years ago
- PyTorch training code covering single precision, half precision, mixed precision, single-GPU, multi-GPU (DP / DDP), FSDP, and DeepSpeed, with comparisons of training speed and GPU memory usage across methods ☆127 · Updated last year
- ☆115 · Updated last year
- A convenient script for grabbing GPUs ☆382 · Updated 10 months ago
- ☆36 · Updated 11 months ago
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆262 · Updated last year
- Models and examples built with OneFlow ☆100 · Updated last year
- 青稞Talk ☆173 · Updated last week
- The Roadmap for LLMs ☆86 · Updated 2 years ago
- ☆215 · Updated 2 weeks ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆339 · Updated 7 months ago
- A purer tokenizer with a higher compression ratio ☆486 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆225 · Updated 2 years ago
- A lightweight script for maintaining a LOT of machine learning experiments. ☆92 · Updated 3 years ago
- ☆84 · Updated 2 years ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆363 · Updated 2 years ago
- MindSpore implementation of transformers ☆68 · Updated 2 years ago