THUDM / FasterTransformer
Transformer-related optimization, including BERT and GPT
☆39 · Updated 2 years ago
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- ☆128 · Updated 6 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- ☆139 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆259 · Updated last month
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆79 · Updated last year
- Export LLaMA to ONNX ☆128 · Updated 6 months ago
- ☆220 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆243 · Updated last year
- ☆53 · Updated last week
- ☆120 · Updated last year
- ☆455 · Updated this week
- An easy-to-use package for implementing SmoothQuant for LLMs (see the sketch after this list) ☆102 · Updated 3 months ago
- ☆195 · Updated 2 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆69 · Updated last year
- Models and examples built with OneFlow ☆97 · Updated 8 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆809 · Updated last month
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- ☆90 · Updated 2 years ago
- FlagScale is a large model toolkit based on open-source projects. ☆321 · Updated this week
- ☆142 · Updated 4 months ago
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆39 · Updated 4 months ago
- LLM inference benchmark ☆421 · Updated 11 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆97 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated 2 weeks ago
- Running BERT without Padding ☆472 · Updated 3 years ago
- ☆59 · Updated 7 months ago
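
One item above links to a SmoothQuant package. As a hedged illustration of the underlying technique (https://arxiv.org/abs/2211.10438), not that package's API, the NumPy sketch below shows how SmoothQuant migrates quantization difficulty from activations to weights with a per-channel scale; the function and variable names are assumptions for illustration only.

```python
import numpy as np

def smooth_scales(act_absmax, weight, alpha=0.5):
    # Per input channel j: s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)
    w_absmax = np.abs(weight).max(axis=1)  # per-input-channel weight absmax
    return act_absmax**alpha / w_absmax**(1 - alpha)

rng = np.random.default_rng(0)
# Activations with one outlier channel, as is typical for LLMs.
X = rng.standard_normal((4, 8)) * np.array([1, 50, 1, 1, 1, 1, 1, 1.0])
W = rng.standard_normal((8, 16))

s = smooth_scales(np.abs(X).max(axis=0), W)
X_hat, W_hat = X / s, W * s[:, None]  # X @ W == X_hat @ W_hat exactly
assert np.allclose(X @ W, X_hat @ W_hat)
# The activation outlier is now shared with the weights, so both tensors
# quantize to INT8 with far less clipping error than X and W would.
```

The equality holds because the scaling is a similarity transform on the matmul: X diag(1/s) · diag(s) W = X W, so smoothing changes nothing in FP but flattens the activation distribution before quantization.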