Ascend / AscendSpeed
☆79 · Updated last year
Alternatives and similar repositories for AscendSpeed
Users interested in AscendSpeed are comparing it to the libraries listed below.
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch appears after this list). ☆100 · Updated last year
- Transformer-related optimization, including BERT and GPT. ☆59 · Updated last year
- Transformer-related optimization, including BERT and GPT. ☆39 · Updated 2 years ago
- Models and examples built with OneFlow. ☆97 · Updated 8 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆96 · Updated last year
- A MoE implementation for PyTorch, [ATC '23] SmartMoE. ☆63 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang. ☆53 · Updated 7 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆127 · Updated 5 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs (the core scaling identity is sketched after this list). ☆100 · Updated 2 months ago
- Transformer-related optimization, including BERT and GPT. ☆17 · Updated last year
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism. ☆55 · Updated 10 months ago
- Simple Dynamic Batching Inference. ☆145 · Updated 3 years ago
- Odysseus: Playground of LLM Sequence Parallelism. ☆70 · Updated last year
- FlagScale is a large-model toolkit built on open-source projects. ☆307 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks. ☆384 · Updated this week
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆38 · Updated 3 months ago
- A simple calculation for LLM MFU (model FLOPs utilization; a worked example follows this list). ☆38 · Updated 3 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆126 · Updated 2 months ago
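For the roofline entry above, a minimal sketch of the model itself: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. The device numbers and the 7B decode example are illustrative assumptions, not figures from that repository.

```python
def roofline_tflops(peak_tflops: float, bandwidth_tbps: float,
                    flops: float, bytes_moved: float) -> float:
    """Attainable TFLOPS = min(peak compute, bandwidth x arithmetic intensity)."""
    intensity = flops / bytes_moved            # FLOPs per byte of memory traffic
    return min(peak_tflops, bandwidth_tbps * intensity)

# Illustrative assumption: decoding one token of a 7B FP16 model reads
# ~14e9 bytes of weights for ~14e9 FLOPs (intensity ~1 FLOP/byte), so an
# A100-class device (312 TFLOPS peak, 2.0 TB/s) attains only ~2 TFLOPS:
print(roofline_tflops(peak_tflops=312.0, bandwidth_tbps=2.0,
                      flops=14e9, bytes_moved=14e9))    # -> 2.0
```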
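The SmoothQuant entry rests on one identity from the SmoothQuant paper: per-channel activation outliers are migrated into the weights by a scale s_j = max|X_j|^α / max|W_j|^(1−α), which leaves the matmul result unchanged while making activations easier to quantize. A minimal PyTorch sketch of that identity, not the package's actual API:

```python
import torch

def smooth(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5):
    """Rescale so that x_hat @ w_hat == x @ w, with activation outliers
    migrated into the weights (easier to quantize afterwards)."""
    act_max = x.abs().amax(dim=0)              # per-input-channel activation range
    w_max = w.abs().amax(dim=1)                # per-input-channel weight range
    s = act_max.pow(alpha) / w_max.pow(1.0 - alpha)
    return x / s, w * s.unsqueeze(1)           # X diag(s)^-1 and diag(s) W

x, w = torch.randn(8, 16), torch.randn(16, 4)
x_hat, w_hat = smooth(x, w)
assert torch.allclose(x @ w, x_hat @ w_hat, atol=1e-5)
```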
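And for the MFU entry: model FLOPs utilization is achieved FLOPs per second divided by the hardware peak, with training costing roughly 6N FLOPs per token for an N-parameter model (2N forward, 4N backward). The throughput and peak numbers below are made up for illustration:

```python
def mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Model FLOPs utilization: achieved training FLOPs/s over peak FLOPs/s.
    Uses the standard ~6N FLOPs-per-token estimate (2N forward + 4N backward)."""
    return 6.0 * n_params * tokens_per_sec / peak_flops

# Illustrative assumption: a 7B model at 3,000 tokens/s on a 312-TFLOPS GPU:
print(f"{mfu(7e9, 3_000, 312e12):.1%}")        # ~40.4%
```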