HFAiLab / hfai-models
HFAI deep learning models
☆147 · Updated last year
Alternatives and similar repositories for hfai-models
Users interested in hfai-models are comparing it to the libraries listed below:
- FireFlyer Record: file format, writer, and reader for DL training samples. ☆223 · Updated 2 years ago
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute. ☆643 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- ☆79 · Updated last year
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆296 · Updated 3 weeks ago
- A flexible and efficient training framework for large-scale alignment tasks ☆346 · Updated 3 months ago
- A Mixture-of-Experts (MoE) implementation for PyTorch, [ATC'23] SmartMoE ☆62 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆190 · Updated 2 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆209 · Updated last month
- VeOmni: Scaling Any-Modality Model Training to Any Accelerator with a PyTorch-Native Training Framework ☆326 · Updated this week
- ☆216 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆403 · Updated last week
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆51 · Updated 6 months ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆348 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training ☆194 · Updated 3 weeks ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆111 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆186 · Updated 8 months ago
- ☆29 · Updated 8 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆176 · Updated last month
- A simple calculation for LLM MFU (Model FLOPs Utilization); see the MFU sketch after this list. ☆38 · Updated 2 months ago
- An industrial extension library of PyTorch to accelerate large-scale model training ☆34 · Updated 2 weeks ago
- ☆132 · Updated 2 months ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆61 · Updated 2 months ago
- Zero Bubble Pipeline Parallelism ☆389 · Updated last week
- Compare different hardware platforms via the roofline model for LLM inference tasks; see the roofline sketch after this list. ☆100 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆492 · Updated 3 weeks ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆71 · Updated last month
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- ☆184 · Updated last month
- Tests of different distributed-training methods on the High-Flyer AIHPC ☆24 · Updated 2 years ago
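
For context on the MFU entry above: MFU for dense transformer training is commonly estimated with the 6·N approximation (roughly 6 FLOPs per parameter per trained token), so achieved FLOP/s ≈ 6 × parameter count × tokens/s, divided by the accelerator's peak FLOP/s. A minimal sketch of that standard approximation, not code from the repo itself; the function name and hardware numbers are illustrative assumptions:

```python
def estimate_mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Estimate Model FLOPs Utilization for dense transformer training.

    Uses the common 6*N approximation: each trained token costs roughly
    6 FLOPs per parameter (2 for the forward pass, 4 for the backward).
    """
    achieved_flops = 6.0 * n_params * tokens_per_sec
    return achieved_flops / peak_flops

# Example: a 7B-parameter model training at 4,000 tokens/s per GPU on an
# A100 with ~312 TFLOP/s peak BF16 throughput (illustrative numbers only).
mfu = estimate_mfu(n_params=7e9, tokens_per_sec=4_000, peak_flops=312e12)
print(f"MFU ≈ {mfu:.1%}")  # ≈ 53.8%
```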
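
Similarly, the roofline entry above rests on a single formula: attainable FLOP/s = min(peak compute, memory bandwidth × arithmetic intensity). A minimal sketch with assumed, illustrative A100-class numbers (not taken from that repo):

```python
def roofline_flops(peak_flops: float, peak_bandwidth: float, intensity: float) -> float:
    """Attainable FLOP/s under the roofline model.

    intensity is arithmetic intensity in FLOPs per byte moved; performance
    is compute-bound above the ridge point and memory-bound below it.
    """
    return min(peak_flops, peak_bandwidth * intensity)

# Decode-phase LLM inference is memory-bound: a batch-1 GEMV reads every
# weight once, so intensity is on the order of 1 FLOP/byte (illustrative).
attainable = roofline_flops(peak_flops=312e12, peak_bandwidth=2.0e12, intensity=1.0)
print(f"Attainable: {attainable / 1e12:.1f} TFLOP/s")  # 2.0, far below the 312 peak
```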