HFAiLab / hfai-models
HFAI deep learning models
☆158 · Updated 2 years ago
Alternatives and similar repositories for hfai-models
Users interested in hfai-models are comparing it to the libraries listed below:
- ☆79 · Updated 2 years ago
- A MoE impl for PyTorch, [ATC'23] SmartMoE (☆71, updated 2 years ago)
- FireFlyer Record file format, writer and reader for DL training samples (☆238, updated 3 years ago); a usage sketch follows this list
- A flexible and efficient training framework for large-scale alignment tasks (☆447, updated 3 months ago)
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training (☆405, updated 6 months ago)
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute (☆737, updated 2 years ago)
- Mixture-of-Experts (MoE) Language Model (☆195, updated last year)
- ☆219 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models (☆139, updated last year)
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs) (☆251, updated last year)
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se… (☆104, updated this week)
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation (☆331, updated 9 months ago)
- Tutorial for Ray (☆36, updated last year)
- ☆74 · Updated this week
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction (☆79, updated 11 months ago)
- 青稞Talk (☆190, updated 2 weeks ago)
- InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies… (☆418, updated 5 months ago)
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang (☆61, updated last year)
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… (☆101, updated 5 months ago)
- An industrial extension library of PyTorch to accelerate large-scale model training (☆58, updated 5 months ago)
- 📑 Dive into Big Model Training (☆116, updated 3 years ago)
- Best practice for training LLaMA models in Megatron-LM (☆664, updated 2 years ago)
- ☆29 · Updated last year
- Accelerate inference without tears (☆372, updated 2 weeks ago)
- ☆115 · Updated last year
- An integrated user interface for use with the HAI Platform (☆54, updated 2 years ago)
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as… (☆346, updated last year)
- Implementation of FlashAttention in PyTorch (☆180, updated last year)
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (☆340, updated 11 months ago)
- Tests of different distributed-training methods on High-Flyer AIHPC (☆27, updated 3 years ago)
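For context on the FireFlyer Record (ffrecord) entry above, here is a minimal write/read sketch in Python, following the usage pattern described in the HFAiLab/ffrecord README. The file name `samples.ffr`, the sample data, and the pickle-based serialization are illustrative assumptions; consult the ffrecord repository for the authoritative API.

```python
# Minimal sketch of the ffrecord write/read pattern (assumes the FileWriter /
# FileReader API shown in the HFAiLab/ffrecord README; verify before use).
import pickle

from ffrecord import FileWriter, FileReader

samples = [{"id": i, "text": f"sample {i}"} for i in range(100)]  # illustrative data

# Write: the writer is told the total sample count up front.
writer = FileWriter("samples.ffr", len(samples))   # "samples.ffr" is a made-up path
for sample in samples:
    writer.write_one(pickle.dumps(sample))         # each sample is stored as raw bytes
writer.close()

# Read: random access by index, optionally validating checksums.
reader = FileReader("samples.ffr", check_data=True)
batch = reader.read([0, 7, 42])                    # returns a list of byte buffers
decoded = [pickle.loads(b) for b in batch]
reader.close()
```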