void-main / fastertransformer_backend
☆21 · Updated 2 years ago
Alternatives and similar repositories for fastertransformer_backend
Users who are interested in fastertransformer_backend are comparing it to the libraries listed below
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- ☆455 · Updated this week
- ☆128 · Updated 6 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆809 · Updated last month
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- LLM inference benchmark ☆421 · Updated 11 months ago
- Running BERT without padding ☆472 · Updated 3 years ago
- ☆411 · Updated last year
- Export LLaMA to ONNX ☆128 · Updated 6 months ago
- Best practice for training LLaMA models in Megatron-LM ☆657 · Updated last year
- ☆139 · Updated last year
- ☆83 · Updated last year
- ☆220 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆69 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆259 · Updated last month
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆58 · Updated 11 months ago
- A LLaMA1/LLaMA2 Megatron implementation. ☆28 · Updated last year
- ☆195 · Updated 2 months ago
- A high-performance inference system for large language models, designed for production environments. ☆451 · Updated this week
- Latency and memory analysis of Transformer models for training and inference ☆434 · Updated 2 months ago
- LiBai (李白): A toolbox for large-scale distributed parallel training ☆408 · Updated 2 weeks ago
- A flexible and efficient training framework for large-scale alignment tasks ☆385 · Updated this week
- ☆142 · Updated 4 months ago
- FlagScale is a large-model toolkit based on open-source projects. ☆321 · Updated this week
- ☆79 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆223 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆97 · Updated last year
- The Triton TensorRT-LLM Backend ☆859 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆130 · Updated 6 months ago