Ascend / AscendSpeed
☆79 · Updated 2 years ago
Alternatives and similar repositories for AscendSpeed
Users interested in AscendSpeed are comparing it to the libraries listed below.
- ☆130 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆59 · Updated 2 years ago
- ☆16 · Updated last year
- Models and examples built with OneFlow ☆100 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆250 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- An MoE implementation for PyTorch, [ATC'23] SmartMoE ☆70 · Updated 2 years ago
- ☆154 · Updated 9 months ago
- ☆97 · Updated 9 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 4 months ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated 2 years ago
- ☆122 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs (a minimal sketch of the scaling idea follows this list) ☆110 · Updated 8 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list) ☆119 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language ☆287 · Updated last year
- ☆68 · Updated this week
- ☆141 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆61 · Updated last year
- ☆219 · Updated 2 years ago
- A summary of system papers, frameworks, code, and tools for training or serving large models ☆57 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM ☆174 · Updated last week
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with the DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆98 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆270 · Updated 4 months ago
- Sequence-level 1F1B schedule for LLMs ☆18 · Updated last year
- ATC'23 AE (artifact evaluation) ☆47 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆219 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Triton implementation of FlashAttention-2 ☆47 · Updated 2 years ago
- FlagScale is a large-model toolkit built on open-source projects ☆426 · Updated last week
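
The SmoothQuant package in the list above implements a per-channel rescaling that shifts quantization difficulty from activations to weights. A minimal sketch of that scaling idea, assuming the standard formulation (scale s_j = max|X_j|^α / max|W_j|^(1−α), with α = 0.5); the names and data here are illustrative, not the linked package's API:

```python
import torch

def smoothquant_scales(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel scales s_j = max|X_j|**alpha / max|W_j|**(1 - alpha)."""
    act_max = x.abs().amax(dim=0)  # activation range per input channel
    w_max = w.abs().amax(dim=0)    # weight range per input channel
    return (act_max ** alpha) / (w_max ** (1 - alpha))

# Hypothetical toy data: activations with a few outlier channels.
x = torch.randn(128, 64) * torch.linspace(0.1, 10.0, 64)  # (tokens, in_features)
w = torch.randn(32, 64)                                   # (out_features, in_features)
s = smoothquant_scales(x, w).clamp(min=1e-5)

# Dividing activations and multiplying weights by s leaves y = x @ w.T
# mathematically unchanged while flattening activation outliers, so both
# tensors become easier to quantize to int8.
x_smooth, w_smooth = x / s, w * s
assert torch.allclose(x @ w.T, x_smooth @ w_smooth.T, atol=1e-3)
```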
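
For the roofline-model entry, attainable throughput is capped at min(peak compute, memory bandwidth × arithmetic intensity). A short worked example with hypothetical hardware numbers (not measurements from the linked repository):

```python
def roofline(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Attainable FLOP/s = min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_flops, mem_bw * intensity)

# Hypothetical accelerator: 312 TFLOP/s peak compute, 1.6 TB/s HBM bandwidth.
peak, bw = 312e12, 1.6e12
for intensity in (1, 50, 200):  # FLOPs performed per byte moved
    print(f"intensity {intensity:>3} FLOP/B -> {roofline(peak, bw, intensity) / 1e12:6.1f} TFLOP/s")
```

Decode-phase LLM inference has very low arithmetic intensity (roughly one pass over the weights per generated token), so it lands on the bandwidth-limited slope of the roofline, while large-batch prefill is compute-bound; that distinction is what roofline comparisons across hardware platforms expose.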