HFAiLab / hfai-models
HFAI deep learning models
☆156 · Updated 2 years ago
Alternatives and similar repositories for hfai-models
Users interested in hfai-models are comparing it to the libraries listed below
- ☆79 · Updated 2 years ago
- FireFlyer Record file format, writer and reader for DL training samples. ☆237 · Updated 3 years ago
- A high-performance deep learning training platform with task-level time-shared scheduling of GPU compute. ☆721 · Updated 2 years ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE. ☆70 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆328 · Updated 7 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 4 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆444 · Updated last month
- Tutorial for Ray ☆36 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- FlagScale is a large model toolkit based on open-sourced projects. ☆425 · Updated last week
- 青稞Talk ☆175 · Updated last week
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆92 · Updated 3 months ago
- ☆115 · Updated last year
- ☆219 · Updated 2 years ago
- A visualization tool that enables deeper understanding and easier debugging of RLHF training. ☆274 · Updated 10 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆250 · Updated last year
- Implementation of FlashAttention in PyTorch ☆176 · Updated 11 months ago
- ☆29 · Updated last year
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆346 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆61 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆122 · Updated 2 years ago
- 📑 Dive into Big Model Training ☆116 · Updated 3 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆173 · Updated last week
- Best practice for training LLaMA models in Megatron-LM ☆664 · Updated last year
- ATC23 AE ☆47 · Updated 2 years ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆248 · Updated last year
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆245 · Updated 4 months ago
- ☆49 · Updated this week