sxontheway / Keep-Learning
The record of what I've been through.
☆99 · Updated 5 months ago
Alternatives and similar repositories for Keep-Learning
Users interested in Keep-Learning are comparing it to the libraries listed below.
- ☆52 · Updated last year
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆96 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆63 · Updated last year
- ☆36 · Updated last year
- DeepSpeed tutorial & annotated examples & study notes (efficient training of large models) ☆169 · Updated last year
- ATC23 AE ☆45 · Updated 2 years ago
- ☆79 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆282 · Updated last year
- PyTorch model-training code for single precision, half precision, mixed precision, single-GPU, multi-GPU (DP / DDP), FSDP, and DeepSpeed, comparing training speed and GPU memory usage across the approaches; a minimal mixed-precision sketch follows this list ☆102 · Updated last year
- ☆84 · Updated last year
- Survey Paper List - Efficient LLM and Foundation Models ☆248 · Updated 9 months ago
- A lightweight script for maintaining a LOT of machine learning experiments. ☆91 · Updated 2 years ago
- ☆201 · Updated 8 months ago
- Export LLaMA to ONNX ☆126 · Updated 5 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆127 · Updated this week
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- Blog posts, reading reports, and code examples for AGI/LLM-related knowledge ☆40 · Updated 4 months ago
- An implementation of transformer, BERT, GPT, and diffusion models for learning purposes ☆154 · Updated 8 months ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆207 · Updated last year
- DeepSpeed Tutorial ☆97 · Updated 10 months ago
- ☆90 · Updated last year
- NTK-scaled version of ALiBi position encoding in Transformer ☆68 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆300 · Updated 3 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models ☆454 · Updated 10 months ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated this week
- Collaborative Training of Large Language Models in an Efficient Way ☆415 · Updated 9 months ago
- Train a Chinese vocabulary with BPE via sentencepiece and use it in transformers; see the tokenizer sketch after this list ☆118 · Updated 2 years ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆89 · Updated 2 months ago
- Inference code for LLaMA models ☆121 · Updated last year
- An awesome GPU task scheduler. A lightweight, easy-to-use task-scheduling tool for GPU clusters; if you find it useful, please give it a star ☆184 · Updated 2 years ago
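
For context on the precision-comparison item above, here is a minimal mixed-precision training sketch in plain PyTorch, not code from that repository; the model, data, and hyperparameters are placeholders chosen for illustration.

```python
import torch
from torch import nn
from torch.cuda.amp import GradScaler, autocast

model = nn.Linear(1024, 10).cuda()               # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = GradScaler()  # rescales the loss so fp16 gradients do not underflow

for step in range(10):
    x = torch.randn(32, 1024, device="cuda")          # placeholder batch
    y = torch.randint(0, 10, (32,), device="cuda")    # placeholder labels
    optimizer.zero_grad(set_to_none=True)
    with autocast():                                   # forward runs in fp16 where safe
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()                      # backward on the scaled loss
    scaler.step(optimizer)                             # unscales grads, then steps
    scaler.update()                                    # adjusts the scale factor

print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
```

Swapping the `autocast()` block in and out (and moving the model under DDP, FSDP, or DeepSpeed) is the kind of speed/memory comparison that repository performs.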
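And for the sentencepiece/BPE item, a minimal sketch of that workflow, assuming a one-sentence-per-line corpus at the hypothetical path `zh_corpus.txt`; the trained model is loaded through `LlamaTokenizer`, which wraps a raw sentencepiece model file. This illustrates the general technique, not that repository's actual scripts.

```python
import sentencepiece as spm
from transformers import LlamaTokenizer

# Train a BPE vocabulary on a plain-text corpus (hypothetical path).
spm.SentencePieceTrainer.train(
    input="zh_corpus.txt",
    model_prefix="zh_bpe",      # writes zh_bpe.model and zh_bpe.vocab
    vocab_size=32000,
    model_type="bpe",
    character_coverage=0.9995,  # high coverage keeps rare CJK characters
)

# LlamaTokenizer wraps a sentencepiece model file, so the trained model
# can be used directly as a transformers tokenizer.
tokenizer = LlamaTokenizer(vocab_file="zh_bpe.model")
print(tokenizer.tokenize("今天天气很好"))
```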