HFAiLab / hfai-models
HFAI deep learning models
☆155Updated 2 years ago
Alternatives and similar repositories for hfai-models
Users interested in hfai-models are comparing it to the libraries listed below
- ☆79Updated 2 years ago
- FireFlyer Record: a file format, writer, and reader for DL training samples.☆238Updated 3 years ago
- A high-performance deep learning training platform with task-level time-shared GPU scheduling☆733Updated 2 years ago
- ☆219Updated 2 years ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training☆405Updated 6 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆139Updated last year
- A flexible and efficient training framework for large-scale alignment tasks☆447Updated 3 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆251Updated last year
- An industrial extension library of pytorch to accelerate large scale model training☆58Updated 5 months ago
- 青稞Talk☆190Updated last week
- A MoE impl for PyTorch, [ATC'23] SmartMoE☆71Updated 2 years ago
- Best practice for training LLaMA models in Megatron-LM☆664Updated 2 years ago
- FlagScale is a large model toolkit based on open-sourced projects.☆471Updated this week
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se…☆102Updated this week
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation☆330Updated 9 months ago
- Accelerate inference without tears☆372Updated last week
- An integrated user interface for use with the HAI Platform☆53Updated 2 years ago
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencie…☆417Updated 5 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.☆271Updated 2 years ago
- Mixture-of-Experts (MoE) Language Model☆195Updated last year
- Efficient AI Inference & Serving☆480Updated 2 years ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models☆339Updated 11 months ago
- ☆29Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models☆618Updated 3 months ago
- Examples of training models with hybrid parallelism using ColossalAI☆339Updated 2 years ago
- The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )☆231Updated last month
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, …☆123Updated 2 years ago
- RLHF experiments on a single A100 40G GPU. Support PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, DeepSeek R1-Zero reproducing.☆79Updated 11 months ago
- Models and examples built with OneFlow☆101Updated last year
- ☆130Updated last year