Rayrtfr / FasterTransformer
Transformer-related optimization, including BERT and GPT
☆17 · Updated last year
Alternatives and similar repositories for FasterTransformer:
Users interested in FasterTransformer are comparing it to the libraries listed below.
- ☆127 · Updated 3 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- ☆139 · Updated 11 months ago
- ☆78 · Updated last year
- ☢️ TensorRT Hackathon 2023 final round: inference acceleration for the Llama model based on TensorRT-LLM ☆46 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs (see the smoothing sketch after this list) ☆96 · Updated last week
- A llama model inference framework implemented in CUDA C++ ☆49 · Updated 5 months ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a worked Roofline sketch follows this list) ☆93 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆35 · Updated last month
- Tianchi NVIDIA TensorRT Hackathon 2023: generative AI model optimization contest, third-place solution in the preliminary round ☆49 · Updated last year
- ☆58 · Updated 4 months ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- OneFlow Serving ☆20 · Updated last week
- ☆78 · Updated 3 weeks ago
- ☆16 · Updated last year
- ☆28 · Updated 2 months ago
- Simplify ONNX models larger than 2 GB ☆55 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (a minimal GQA decode sketch follows this list) ☆36 · Updated 2 weeks ago
- ☆131 · Updated last month
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆111 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated 10 months ago
- OneFlow->ONNX ☆43 · Updated last year
- A simplified flash-attention implementation using CUTLASS, intended as a teaching example ☆39 · Updated 8 months ago
- ☆43 · Updated 3 weeks ago
- ☆90 · Updated last year
- ☆148 · Updated 3 months ago
- ☆23 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆61 · Updated last year
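
For the SmoothQuant entry above, the core trick described in the SmoothQuant paper is to migrate quantization difficulty from activations to weights with a per-channel scale s_j = max|X_j|^alpha / max|W_j|^(1 - alpha); since X W = (X diag(s)^-1)(diag(s) W), the product is unchanged. The NumPy sketch below is purely illustrative (random tensors, not that package's API):

```python
import numpy as np

# Illustrative SmoothQuant-style smoothing on random tensors (not the package's API).
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16)) * np.array([1.0] * 15 + [50.0])  # one outlier channel
W = rng.standard_normal((16, 4))

alpha = 0.5  # migration strength; 0.5 is the default suggested in the paper
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)

X_smooth = X / s           # activation outliers are damped...
W_smooth = W * s[:, None]  # ...and absorbed into the weights

# The product is mathematically unchanged, but X_smooth is far easier to
# quantize per-tensor to INT8 than the original X.
assert np.allclose(X @ W, X_smooth @ W_smooth)
print("max |X| before:", np.abs(X).max(), "after:", np.abs(X_smooth).max())
```

With the outlier channel damped, the smoothed activations quantize to INT8 with much less clipping error than the originals.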
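For the Roofline Model entry, attainable throughput is min(peak compute, memory bandwidth × arithmetic intensity). The sketch below uses illustrative A100-class peak numbers (assumptions, not measurements) to show why LLM decode is memory-bound while prefill is compute-bound:

```python
# Minimal Roofline sketch: attainable FLOP/s = min(peak, bandwidth * intensity).
# Peak numbers below are illustrative A100-class values, not measurements.

def roofline(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Attainable FLOP/s for a kernel with the given arithmetic intensity (FLOP/byte)."""
    return min(peak_flops, mem_bw * intensity)

PEAK_FLOPS = 312e12  # assumed FP16 tensor-core peak, FLOP/s
MEM_BW = 2.0e12      # assumed HBM bandwidth, byte/s

# LLM decode is a GEMV-like workload: each FP16 weight byte loaded supports
# roughly one multiply-add, so intensity is on the order of 1 FLOP/byte.
decode_intensity = 1.0
print(f"decode:  {roofline(PEAK_FLOPS, MEM_BW, decode_intensity):.3e} FLOP/s (memory-bound)")

# Prefill batches many tokens per weight load, pushing intensity much higher.
prefill_intensity = 300.0
print(f"prefill: {roofline(PEAK_FLOPS, MEM_BW, prefill_intensity):.3e} FLOP/s (compute-bound)")
```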
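For the Decoding Attention entry, a minimal grouped-query attention (GQA) decode step looks like the following; the shapes, names, and sizes here are my own illustration, not that repository's kernels. MHA and MQA fall out as the n_kv_heads == n_q_heads and n_kv_heads == 1 special cases:

```python
import numpy as np

def gqa_decode_step(q, k_cache, v_cache):
    """One GQA decode step: q is (n_q_heads, d); caches are (n_kv_heads, seq, d).
    Each group of n_q_heads // n_kv_heads query heads shares one KV head."""
    n_q, d = q.shape
    n_kv, seq, _ = k_cache.shape
    group = n_q // n_kv
    out = np.empty_like(q)
    for h in range(n_q):
        kv = h // group                           # shared KV head for this query head
        scores = k_cache[kv] @ q[h] / np.sqrt(d)  # (seq,) dot products with cached keys
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                      # softmax over cached positions
        out[h] = probs @ v_cache[kv]              # (d,) weighted sum of cached values
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 64))       # 8 query heads
k = rng.standard_normal((2, 128, 64))  # 2 KV heads, 128 cached tokens
v = rng.standard_normal((2, 128, 64))
print(gqa_decode_step(q, k, v).shape)  # (8, 64)
```

Sharing KV heads shrinks the KV cache read per decode step, which is exactly the memory-bandwidth bottleneck the Roofline sketch above identifies.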