Ascend / AscendSpeed
☆79 · Updated 2 years ago
Alternatives and similar repositories for AscendSpeed
Users interested in AscendSpeed are comparing it to the libraries listed below.
- ☆130 · Updated last year
- Transformer related optimization, including BERT, GPT ☆59 · Updated 2 years ago
- A MoE impl for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- Transformer related optimization, including BERT, GPT ☆39 · Updated 3 years ago
- ☆96 · Updated 10 months ago
- ☆155 · Updated 11 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆110 · Updated 10 months ago
- ☆74 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Updated last year
- ☆141 · Updated last year
- ☆16 · Updated last year
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆405 · Updated 6 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆251 · Updated last year
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆477 · Updated last year
- Models and examples built with OneFlow ☆101 · Updated last year
- ☆219 · Updated 2 years ago
- Transformer related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- ☆125 · Updated last year
- Distributed IO-aware Attention algorithm ☆24 · Updated 4 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 6 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated 2 years ago
- ☆206 · Updated 9 months ago
- Efficient, flexible, and highly fault-tolerant model service management based on SGLang ☆61 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆184 · Updated last month
- Sequence-level 1F1B schedule for LLMs. ☆19 · Updated last year
- ☆23 · Updated this week
- Built upon Megatron-DeepSpeed and the HuggingFace Trainer, EasyLLM reorganizes the code logic with a focus on usability. While enhancing … ☆49 · Updated last year
- A simple calculation for LLM MFU. ☆66 · Updated 5 months ago